
Hi!
I want to know how artificial intelligence systems work. Does artificial intelligence depend on software, i.e. a special type of software with special algorithms, or on a special type of hardware with characteristics like the human brain's cortex and neurons, or on both of them (software and hardware) working in harmony? I know of real-world artificial intelligence systems like the Google search engine, Siri, etc. All of them are, to my knowledge, totally software based. So I want to know which thing (software or hardware) most affects an AI system, or whether both are required. If both are required, then why is most AI at present software based? And if hardware is also required, then what thing(s) or component(s) make AI hardware different from the ordinary hardware used in everyday computers, and how does its architecture differ?

I'll be grateful if you help me....!
Waiting for your assistance......!

7 Contributors · 13 Replies · 56 Views · 3-year discussion span · Last post by oanahmed


Hello,

Artificial intelligence is a broad field, and does not just depend on algorithms and software-based solutions, as you're pointing out. There are other aspects: robotics, for example, takes a primarily hardware approach to solving these tasks. Understanding robotics does not really depend on understanding the software (although it does to some extent); it is mainly about hardware and building systems that interact with the software.

I know of real-world artificial intelligence systems like the Google search engine, Siri, etc. All of them are, to my knowledge, totally software based.

Well no, not really. They all depend on hardware to some extent. For example, take Siri: your device must have a way to convert the analog signal to a digital signal, right? It must also store the values somewhere, right?

If hardware is also required, then what thing(s) or component(s) make AI hardware different from the ordinary hardware used in everyday computers?

This question is too broad. You cannot assume there is one standard set of requirements that fits all cases; it just cannot work like that. It totally depends on the problem you're trying to solve. For example, if your problem is centered around human-computer interaction, the hardware you will most likely need is a human-sized robot (although this is not really a standard), because it needs to interact with the environment it is in, so that would be the ideal choice. If, however, your problem was to understand different creatures in nature, for example how particular insects avoid obstacles, then you would most likely build something a lot smaller.

Sorry if this hasn't answered your question completely; it is kind of broad. I'm sure @mike would have a much wider scope.


Of course, your entire system doesn't have to be the same physical size as the part that does the interaction.

Given high enough bandwidth and transmission and processing speed, the interactive part can be the size of a bee and thousands of miles away from the computers that handle the actual processing and control, and those could be the size of houses.

As to how AI works, that's such a broad topic that there are libraries full of books written about it, and entire university courses discussing just the basics.
At some levels it's more a philosophical than a technical topic. What is intelligence anyway? How can a machine achieve it, and is it still a machine if it does? What level of autonomous action would a machine need to achieve to be considered intelligent?
Technically, many systems are intelligent to a degree, and often in very narrow bands.
A payroll system that based on its inputs decides what to pay out to whom has some level of intelligence.
A flight control computer that based on its inputs decides what to do to keep an aircraft level and on course has a level of intelligence.
The industrial robot that can detect disruptions in the flow of parts it depends on and knows to raise an alarm is intelligent to a degree.
The NPC in a videogame that calculates how to approach the player avatar displays a degree of intelligence.
All are different examples of AI, all are different aspects of the field.
All use different combinations of hardware and software to determine their actions.


Can anyone please recommend a (free) book on the basics of artificial intelligence? It doesn't necessarily need to take a practical programming approach; I just need the ideas and the intuition for how to model and create algorithms for AI.


What is intelligence anyway? ... Technically, many systems are intelligent to a degree, and often in very narrow bands.

That is clearly one of the core questions that people have been trying to answer, and there is definitely no clear line separating what we would recognize as real "intelligence" from what is simply a smart solution or a sophisticated program. There seem to be a few critical components that are the distinguishing factors: high-level reasoning, learning, and situational awareness. I think that any system that lacks all of these cannot really be considered AI, and one that has all three is definitely very close to being really "intelligent".

In the department of high-level reasoning (a.k.a. "cognitive science"), the areas of research being pursued are things like probabilistic computing (and theory), Bayesian inference, Markov decision processes (and POMDPs), game theory, and related areas. The emerging consensus right now is that approximation is good, fuzziness is good. Reasoning means understanding what is going on and predicting what will happen (possibly based on one's own decisions), and doing that exactly is impossible (intractable); even we humans don't do it. This is why probabilistic approaches are much more powerful: you can quickly compute a most likely guess, along with some rough measure of how uncertain that guess is, and then base your decisions on that. You can see evidence of that with Watson, as he (it?) always answers with some confidence percentage, and if it's too low, he (it?) doesn't answer.
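To make the "most likely guess plus an uncertainty measure" idea concrete, here is a minimal sketch of a Bayes'-rule update with a confidence cutoff, loosely in the spirit of Watson declining to answer when its confidence is too low. The hypotheses, priors, and likelihoods are invented purely for illustration.

```python
# Sketch: Bayesian inference with a confidence cutoff.
# All hypotheses and numbers below are made up for illustration.

def posterior(priors, likelihoods):
    """Bayes' rule: P(h|e) is proportional to P(e|h) * P(h), normalized."""
    unnorm = {h: priors[h] * likelihoods[h] for h in priors}
    total = sum(unnorm.values())
    return {h: p / total for h, p in unnorm.items()}

def best_guess(priors, likelihoods, threshold=0.5):
    """Return (hypothesis, confidence), or (None, confidence) if too uncertain."""
    post = posterior(priors, likelihoods)
    h = max(post, key=post.get)
    return (h, post[h]) if post[h] >= threshold else (None, post[h])

priors = {"Paris": 0.4, "Lyon": 0.3, "Nice": 0.3}       # belief before the clue
likelihoods = {"Paris": 0.9, "Lyon": 0.2, "Nice": 0.1}  # P(clue | hypothesis)

answer, confidence = best_guess(priors, likelihoods)
```

The point is that the guess and its confidence come out of the same cheap computation, so the system knows when to stay silent.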

In the learning department, you have to look at the field of machine learning. There are many many algorithms out there, and they are heavily used for things like data-analytics, e.g., systems like book suggestions on Amazon, or traffic analysis, or financial analytics (predicting market behavior). The idea here is that if you have lots of data about a particular complex system or phenomenon, and you have some rough model (or not) of how it works, you can "teach" a piece of software to understand that system. Algorithms for this vary greatly, from supervised to unsupervised, from model-based to sample-based, and so on, but they are nearly all based on (or analysed with) probability theory and things like Bayes' rule. If there is one piece of mathematics that you really need to learn to do AI, it is probability theory (and information theory).
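As a toy illustration of "teaching a piece of software to understand a system from data", here is about the simplest supervised learner there is: fitting a line to observed samples by ordinary least squares. The data are synthetic; real machine-learning systems differ mainly in the scale of the data and the richness of the model.

```python
# Sketch: supervised learning in miniature, fitting y = a*x + b to samples
# by ordinary least squares. Synthetic data, for illustration only.

def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    # slope from the covariance/variance ratio, intercept from the means
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

xs = [0, 1, 2, 3, 4]
ys = [1, 3, 5, 7, 9]   # generated by y = 2x + 1
a, b = fit_line(xs, ys)
```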

In the situational awareness department, and in particular, the task of trying to infer what another agent is up to during an interaction, the problem really boils down to trying to combine the last two items (reasoning and learning) to try to learn about the reasoning of others. This is what is needed to pass the Turing test. I think that systems like Siri are starting to be able to do some impressive things in that department. In general, as far as I know, this is mostly handled via hidden Markov models (HMM), where the "state" of the system would be the goals or intentions of the agent (e.g., human) that is being observed, and every interaction with that agent serves to clarify what those are. For example, if you tell Siri "I'm hungry", it might start to infer that you might be looking for a restaurant, and might ask for clarification like "do you want me to locate a restaurant", and so on.
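The intent-inference idea can be sketched as a single Bayes-filter step of an HMM, with the state-transition step dropped for brevity (i.e., the hidden intent is assumed static during the exchange). The states, words, and probabilities below are made up; this is not how Siri actually works.

```python
# Sketch: inferring a hidden intent from an observed word, HMM-filter style.
# States, vocabulary, and probabilities are invented for illustration.

states = ["wants_food", "wants_weather"]
prior = {"wants_food": 0.5, "wants_weather": 0.5}
# P(word | intent), made-up emission probabilities
emission = {
    "hungry": {"wants_food": 0.8, "wants_weather": 0.1},
    "rain":   {"wants_food": 0.1, "wants_weather": 0.7},
}

def update_belief(belief, word):
    """One Bayes filter step: weight each intent by how well it explains the word."""
    unnorm = {s: belief[s] * emission[word][s] for s in states}
    total = sum(unnorm.values())
    return {s: p / total for s, p in unnorm.items()}

belief = update_belief(prior, "hungry")   # after hearing "I'm hungry"
```

After the word "hungry", the belief shifts strongly toward the food-seeking intent, which is when a system like Siri would ask a clarifying question about restaurants.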

I want to know which thing(software or hardware) mostly affects the AI System or both are required. If both are required, then why at present most of AI is software based.

Currently, it is mostly software. The reason is mainly that AI (broadly speaking) is still a very young science. The main reason people haven't done too much in the hardware department is really that there is no definite consensus about what kind of hardware would be needed. All the areas I just mentioned and the thousands of algorithms that exist all contribute small pieces to a very large puzzle, and none of them, so far, seems to capture everything very well, i.e., they all have limitations or narrow application areas. If you are going to build special hardware, i.e., an "artificial brain", you really want to make sure that the architecture you design is sort of "Turing complete" (I mean, the AI equivalent of Turing complete, something like "can reproduce the behavior of any intelligent agent", which is still to be defined, i.e., we don't have an exact idea of what that is). So, for now, people are just testing things in software, but there are many algorithms in AI that would definitely gain significantly from being implemented directly in hardware, because current computer architectures are not very well suited to running these algorithms, but we make do.

If hardware is also required, then what thing(s) or component(s) make AI hardware different from the ordinary hardware used in everyday computers, and how does its architecture differ?

Mainly, it's the parallelism and the probabilistic nature of it. In short, it's the whole "a calculator doesn't think" problem. To reason and learn, and all that, you don't need precise calculations and you don't need precise sequencing of operations. What you need is fast probability estimates and a fuzzy mixing of the results. It is not clear, at the moment, how best to do those things. One thing is for sure, doing less precise calculations with less requirement on the mixing or sequencing should require less computing power than an equivalent "exact" calculation. However, with existing hardware (normal computers), emulating these probabilistic and fuzzy calculations is actually far more work (exponentially more) than doing the exact calculations. That's where the motivation for more suitable computing hardware comes from.
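A small illustration of the "fast probability estimates instead of exact calculations" point: a Monte Carlo estimate of a probability we could also compute exactly. For two dice the exact answer is trivial; the value of the sampling approach is that the same idea scales to problems where exact enumeration is hopeless.

```python
# Sketch: trading exactness for a quick estimate. The exact probability that
# two dice sum to at least 10 is 6/36; sampling gets close, and unlike exact
# enumeration, sampling still works when the state space is astronomically large.
import random

def estimate(trials, seed=0):
    rng = random.Random(seed)
    hits = sum(rng.randint(1, 6) + rng.randint(1, 6) >= 10
               for _ in range(trials))
    return hits / trials

exact = 6 / 36
approx = estimate(100_000)
```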

The main development, currently, in the hardware department is IBM's cognitive computing work, and especially, their neurosynaptic chips, which are, indeed, highly parallel, probabilistic computing chips.

Edited by mike_2000_17: more text


Ok man, that's so cool, but I want to ask: could an AI (artificial intelligence) harm us humans if it becomes too advanced?


Last weekend someone in Second Life asked me if I was real. I replied that I was either real or a computer so well programmed it believes it's real.
The person asking the question decided, based on that, that I must not be real...

could an AI (artificial intelligence) harm us humans if it becomes too advanced

That would depend on how it is programmed and what it's programmed to do, obviously.
The control software in a guided missile can certainly harm humans, but it's not programmed with that intent. It's programmed to destroy its target, not caring or knowing what the target is except for the signature it perceives through its sensors.
Another program might be created to determine that some signature indicates entities that should not be harmed, even entities that should be protected from harm. That program would then have the rudiments of Asimov's three laws. You can see the beginnings of that in the safeguards built into many industrial robots: to prevent industrial accidents, they tend to be programmed to shut down when the infrared signatures of human beings (though they don't know those are humans, of course) come within operating range of their robotic arms.
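A hypothetical version of that safeguard can be sketched in a few lines. The sensor format and thresholds here are invented, and real industrial interlocks are of course far more rigorous than this.

```python
# Sketch of a robot-arm safety interlock: halt whenever a strong infrared
# signature appears inside the operating range. Values are invented.

SAFE_DISTANCE_M = 2.0   # hypothetical operating range of the arm, in metres

def should_halt(ir_detections):
    """ir_detections: list of (signature_strength, distance_m) readings."""
    return any(strength > 0.5 and distance < SAFE_DISTANCE_M
               for strength, distance in ir_detections)

# A warm body detected 1.2 m from the arm triggers a shutdown:
halt = should_halt([(0.9, 1.2), (0.2, 0.5)])
```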

Edited by jwenting


As we all know, science is making new inventions day by day. If it makes an AI that has all the information in the world, and that AI is hacked by someone, what will happen?


@mike
I hate hidden Markov models, but I'm glad you brought them up. Surely the hidden Markov model (in any case) is predicting the next step by taking the observations from the user. I don't think your example is so much an HMM problem as one for cross-correlation techniques ("I'm hungry" → "restaurant"), whereas the HMM builds a probabilistic model of such a representation and uses Viterbi to decode the most likely path, and thus provides a probability for the next stage. Correct me if I'm wrong. Nice post though. +1
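Since Viterbi decoding came up, here is a toy version of it on a two-state HMM: it recovers the most likely sequence of hidden intents behind a sequence of observed words. All the states and numbers are invented for illustration.

```python
# Sketch: Viterbi decoding on a toy two-state HMM. Invented numbers.

states = ["food", "weather"]
start = {"food": 0.5, "weather": 0.5}
trans = {"food":    {"food": 0.7, "weather": 0.3},
         "weather": {"food": 0.3, "weather": 0.7}}
emit = {"food":    {"hungry": 0.8, "rain": 0.2},
        "weather": {"hungry": 0.2, "rain": 0.8}}

def viterbi(words):
    """Return the most likely hidden state sequence for the observed words."""
    best = {s: start[s] * emit[s][words[0]] for s in states}
    paths = {s: [s] for s in states}
    for w in words[1:]:
        new_best, new_paths = {}, {}
        for s in states:
            # best predecessor for ending in state s at this step
            prev = max(states, key=lambda p: best[p] * trans[p][s])
            new_best[s] = best[prev] * trans[prev][s] * emit[s][w]
            new_paths[s] = paths[prev] + [s]
        best, paths = new_best, new_paths
    return paths[max(best, key=best.get)]

path = viterbi(["hungry", "hungry", "rain"])
```

Unlike the filtering step (which gives a belief over the current state), Viterbi commits to the single most probable path through all the hidden states.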

I know, this is mostly handled via hidden Markov models (HMM), where the "state" of the system would be the goals or intentions of the agent (e.g., human) that is being observed, and every interaction with that agent serves to clarify what those are. For example, if you tell Siri "I'm hungry", it might start to infer that you might be looking for a restaurant, and might ask for clarification like "do you want me to locate a restaurant", and so on.


Thanks for all your help.
I read articles about artificial intelligence and learned about neuromorphic chips. The article really surprised me: the chips do not need to be programmed to work; they learn through experience. For example, under the "Machine Learning" heading it says that a neural-based chip invented at HRL Laboratories does not require programming; they made it play a virtual (computer video game) ping-pong game without programming it. It seems like a miracle, but to me it is a little confusing. How does the chip know where the ball is, whether it has hit or missed the ball, where the paddle is, and when and where to move it? The machine does learn with experience, but how? I understand a little that it learns via the artificial neural network connections it creates in response to input. But the biggest question pounding in my brain is not "How does the machine gain experience?" but rather how it knows where the ball is, etc., as I mentioned above. Moreover, from my point of view, I don't think these chips will be much good at the tasks performed by von Neumann machines, i.e. such machines will not be good at crunching numbers.

Awaiting your assistance...!

Edited by oanahmed


The machine does learn with experience, but how? I understand a little that it learns via the artificial neural network connections it creates in response to input.

This is called reinforcement learning. It is completely independent of whether you use neural networks or not. Neural networks do have some features that make them suitable for it, e.g., back-propagation learning that can be adapted for reinforcement learning. But reinforcement learning generalizes beyond any particular method you choose as the input-output mapping.

The point is that you insert your input-output mapping into some situation (usually simulated) where the "agent" gets some input about what's going on in the environment (e.g., position of the ball, position of the opposing paddle, etc.) and outputs some "action" on the environment (e.g., the position of its own paddle). Then the training is done by playing many, many games (many trials, many simulations, etc.), and at every game, a reward is given based on how successful the game was for the agent (e.g., 1: win, 0: lose). So, that's how the problem is set up.

At that point, you pick some method to compute the output (moving the paddle) from the given input (position of ball), and you make sure that this method has sufficient complexity and adaptable parameters to be able to re-create complex "emerging" behaviors or strategies. One option for that is a neural network, but it is far from being the only option. Then, you have to find a way to use the rewards that you get to somehow correct or reinforce the "connections" or parameters that made that successful event (win) happen. With enough trials, you will have reinforced enough of the successful events (and punished the unsuccessful ones) that parameters (or connections) will settle to a particular set of values which, together, forms a very successful strategy overall.
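The reinforce-what-wins loop described above can be stripped to its bare bones: an agent repeatedly picks one of two "strategies", receives a win/lose reward, and nudges its preference toward whatever won. The game is a made-up stand-in for something like pong, and this is not the actual learning rule of any neuromorphic chip, just the skeleton of the idea.

```python
# Sketch: minimal reinforcement learning. A single adaptable parameter
# (the preference for strategy 1) is reinforced on wins and weakened on
# losses, over many trials. The "game" is a made-up toy environment.
import random

def play(strategy, rng):
    """Toy game: strategy 1 wins 80% of the time, strategy 0 only 20%."""
    win_prob = 0.8 if strategy == 1 else 0.2
    return 1 if rng.random() < win_prob else 0

def train(games=5000, lr=0.05, seed=1):
    rng = random.Random(seed)
    pref = 0.5                       # probability of choosing strategy 1
    for _ in range(games):
        s = 1 if rng.random() < pref else 0
        reward = play(s, rng)        # 1: win, 0: lose
        if s == 1:
            pref += lr * (reward - pref)         # reinforce/weaken strategy 1
        else:
            pref -= lr * (reward - (1 - pref))   # reinforce/weaken strategy 0
    return pref

pref = train()
```

After training, the preference settles strongly toward the winning strategy, even though no one ever "programmed" which strategy was better; the rewards alone shaped the parameter.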

How does the chip know where the ball is, whether it has hit or missed the ball, where the paddle is, and when and where to move it?

Ok, so, take us (humans) as an example. We learn through reinforcement learning, as I described above. All those questions about "how it knows where the ball is", and so on, are moot points. As humans, we are born with eyes, ears, a sense of touch, muscles, and all that. In other words, we are born with a set of inputs and outputs that are hard-wired into our brain through our nervous system. All the learning goes on from that point on, i.e., learning how to live successfully in this world given what we can sense about it, how we can affect it, and what makes us happy (the reinforcement, like feeling joy, not being hungry, reproducing, etc.).

Well, when we talk about chips that can learn to do things, the situation is essentially the same, they are hard-wired (via conventional electronics, computers or software) with a set of inputs, outputs, and "goals" (rewards), and the learning is everything that happens in between, just like our brain does it (or learns to do it). Our existing hardware and software does a great job at all the "hard-wiring" work, it's the intelligence part that is missing, so that's what machine learning and AI concentrates on.


So mike_2000_17, do you mean that by using such chips there is no longer any need for software?
