What is artificial intelligence? Types, uses and benefits of artificial intelligence

WHAT IS ARTIFICIAL INTELLIGENCE

Artificial intelligence can simply be defined as the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings. The term is frequently applied to the project of developing systems endowed with the intellectual processes characteristic of humans, such as the ability to reason, discover meaning, generalize, or learn from past experience. Since the development of the digital computer in the 1940s, it has been demonstrated that computers can be programmed to carry out very complex tasks (such as discovering proofs for mathematical theorems or playing chess) with great proficiency. Still, despite continuing advances in computer processing speed and memory capacity, there are as yet no programs that can match human flexibility over wider domains or in tasks requiring much everyday knowledge. On the other hand, some programs have attained the performance levels of human experts and professionals in performing specific tasks, so that artificial intelligence in this limited sense is found in applications as diverse as medical diagnosis, computer search engines, and voice or handwriting recognition. By one estimate, American companies possessed about two-thirds of investments in artificial intelligence.

Alternatively, artificial intelligence can be seen as a wide-ranging branch of computer science concerned with building smart machines capable of performing tasks that typically require human intelligence. AI is an interdisciplinary science with multiple approaches, but advancements in machine learning and deep learning are creating a paradigm shift in virtually every sector of the tech industry.

WORKING PRINCIPLE OF ARTIFICIAL INTELLIGENCE
After breaking the Nazi encryption machine Enigma and helping the Allied Forces win World War II, mathematician Alan Turing changed history a second time with a simple question: "Can machines think?"
Turing's paper "Computing Machinery and Intelligence" (1950), and its subsequent Turing Test, established the fundamental goal and vision of artificial intelligence.
At its core, AI is the branch of computer science that aims to answer Turing's question in the affirmative. It is the endeavor to replicate or simulate human intelligence in machines.
The expansive goal of artificial intelligence has given rise to many questions and debates. In their textbook Artificial Intelligence: A Modern Approach, Stuart Russell and Peter Norvig identify four different approaches that have historically defined the field of AI:

Thinking humanly
Thinking rationally
Acting humanly
Acting rationally

The first two ideas concern thought processes and reasoning, while the others deal with behavior. Norvig and Russell focus particularly on rational agents that act to achieve the best outcome, noting that "all the skills needed for the Turing Test also allow an agent to act rationally" (Russell and Norvig 4).

Patrick Winston, the Ford professor of artificial intelligence and computer science at MIT, defines AI as "algorithms enabled by constraints, exposed by representations that support models targeted at loops that tie thinking, perception and action together."

While these definitions may seem abstract to the average person, they help focus the field as an area of computer science and provide a blueprint for infusing machines and programs with machine learning and other subsets of artificial intelligence.

While addressing a crowd at the Japan AI Experience in 2017, DataRobot CEO Jeremy Achin began his speech by offering the following definition of how AI is used today:

"AI is a computer system able to perform tasks that ordinarily require human intelligence... Many of these artificial intelligence systems are powered by machine learning, some of them are powered by deep learning and some of them are powered by very boring things like rules."

TYPES OF AI

NARROW AI: Sometimes referred to as "Weak AI," this kind of artificial intelligence operates within a limited context and is a simulation of human intelligence. Narrow AI is often focused on performing a single task extremely well, and while these machines may seem intelligent, they are operating under far more constraints and limitations than even the most basic human intelligence.
ARTIFICIAL GENERAL INTELLIGENCE (AGI): AGI, sometimes referred to as "Strong AI," is the kind of artificial intelligence we see in the movies, like the robots from Westworld or Data from Star Trek: The Next Generation. AGI is a machine with general intelligence and, much like a human being, it can apply that intelligence to solve any problem.

Narrow AI is all around us and is easily the most successful realization of artificial intelligence to date. With its focus on performing specific tasks, Narrow AI has experienced numerous breakthroughs in the last decade that have had "significant societal benefits and have contributed to the economic vitality of the nation," according to "Preparing for the Future of Artificial Intelligence," a 2016 report released by the Obama Administration.

EXAMPLES OF NARROW AI INCLUDE:

1. Google search
2. Image recognition software
3. Siri, Alexa and other personal assistants
4. Self-driving cars
5. IBM's Watson

Much of Narrow AI is powered by breakthroughs in machine learning and deep learning. Understanding the difference between artificial intelligence, machine learning and deep learning can be confusing. Venture capitalist Frank Chen provides a good overview of how to distinguish between them, noting:

"Artificial intelligence is a set of algorithms and intelligence to try to mimic human intelligence. Machine learning is one of them, and deep learning is one of those machine learning techniques."

Simply put, machine learning feeds a computer data and uses statistical techniques to help it "learn" how to get progressively better at a task, without having been specifically programmed for that task, eliminating the need for millions of lines of written code. Machine learning consists of both supervised learning (using labeled data sets) and unsupervised learning (using unlabeled data sets).
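
To make the distinction concrete, here is a minimal sketch in Python (assuming the scikit-learn library is available; the toy data and labels are invented for illustration) that trains a supervised classifier on labeled points and then lets an unsupervised algorithm find clusters in the same points without labels:

```python
# A minimal sketch (toy data invented for illustration) contrasting
# supervised learning on labeled points with unsupervised clustering
# of the same points without labels. Assumes scikit-learn is installed.
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

# Supervised: each training point comes with a label to predict.
X = [[1.0], [2.0], [3.0], [8.0], [9.0], [10.0]]
y = [0, 0, 0, 1, 1, 1]  # labels: 0 = "small", 1 = "large"
classifier = LogisticRegression().fit(X, y)
print(classifier.predict([[2.5], [8.5]]))  # expected: [0 1]

# Unsupervised: same points, no labels; structure is discovered.
clusters = KMeans(n_clusters=2, n_init=10).fit_predict(X)
print(clusters)  # two groups found without any labels
```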

Deep learning is a type of machine learning that runs inputs through a biologically-inspired neural network architecture. The neural networks contain a number of hidden layers through which the data is processed, allowing the machine to go "deep" in its learning, making connections and weighting input for the best results.
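
As a rough illustration of that architecture, the following sketch passes an input through two hidden layers of a small feedforward network. The layer sizes and random weights are placeholders; a real network would learn its weights from data (for example, by backpropagation), which this sketch omits:

```python
# A rough sketch of a feedforward network with hidden layers.
# Weights are random placeholders; a real network learns them
# (e.g. via backpropagation), which this sketch omits.
import numpy as np

def relu(x):
    return np.maximum(0.0, x)  # the nonlinearity between layers

rng = np.random.default_rng(0)
x = rng.normal(size=4)        # a 4-feature input
W1 = rng.normal(size=(8, 4))  # first hidden layer: 8 units
W2 = rng.normal(size=(8, 8))  # second hidden layer ("deep")
W3 = rng.normal(size=(2, 8))  # output layer: 2 scores

hidden1 = relu(W1 @ x)        # data processed through hidden layers,
hidden2 = relu(W2 @ hidden1)  # weighting inputs at every step
output = W3 @ hidden2
print(output)
```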
The creation of a machine with human-level intelligence that can be applied to any task is the Holy Grail for many AI researchers, but the quest for AI has been fraught with difficulty.
The search for a "universal algorithm for learning and acting in any environment" (Russell and Norvig 27) isn't new, but time hasn't eased the difficulty of essentially creating a machine with a full set of cognitive abilities.
AGI has long been the muse of dystopian science fiction, in which super-intelligent robots overrun humanity, but experts agree it's not something we need to worry about anytime soon.

HISTORY OF AI

Intelligent robots and artificial beings first appeared in ancient Greek myths. Aristotle's development of the syllogism and its use of deductive reasoning was a key moment in mankind's quest to understand its own intelligence. While the roots are long and deep, the history of artificial intelligence as we think of it today spans less than a century. The following is a quick look at some of the most important events in AI.

1943

Warren McCulloch and Walter Pitts publish "A Logical Calculus of the Ideas Immanent in Nervous Activity." The paper proposes the first mathematical model for building a neural network.

1949

In his book The Organization of Behavior: A Neuropsychological Theory, Donald Hebb proposes the theory that neural pathways are created from experiences and that connections between neurons become stronger the more frequently they're used. Hebbian learning continues to be an important model in AI.

1950

Alan Turing publishes "Computing Machinery and Intelligence," proposing what is now known as the Turing Test, a method for determining if a machine is intelligent.

Harvard undergraduates Marvin Minsky and Dean Edmonds build SNARC, the first neural network computer.

Claude Shannon publishes the paper "Programming a Computer for Playing Chess."

Isaac Asimov publishes the "Three Laws of Robotics."

1952

Arthur Samuel develops a self-learning program to play checkers.

1954

The Georgetown-IBM machine translation experiment automatically translates 60 carefully selected Russian sentences into English.

1956

The phrase artificial intelligence is coined at the "Dartmouth Summer Research Project on Artificial Intelligence." Led by John McCarthy, the conference, which defined the scope and goals of AI, is widely considered to be the birth of artificial intelligence as we know it today.

Allen Newell and Herbert Simon demonstrate Logic Theorist (LT), the first reasoning program.

1958

John McCarthy develops the AI programming language Lisp and publishes the paper "Programs with Common Sense." The paper proposes the hypothetical Advice Taker, a complete AI system with the ability to learn from experience as effectively as humans do.

1959

Allen Newell, Herbert Simon and J.C. Shaw develop the General Problem Solver (GPS), a program designed to imitate human problem-solving.

Herbert Gelernter develops the Geometry Theorem Prover program.

Arthur Samuel coins the term machine learning while at IBM.

John McCarthy and Marvin Minsky found the MIT Artificial Intelligence Project.

1963

John McCarthy starts the AI Lab at Stanford.

1966

The Automatic Language Processing Advisory Committee (ALPAC) report by the U.S. government details the lack of progress in machine translations research, a major Cold War initiative with the promise of automatic and instantaneous translation of Russian. The ALPAC report leads to the cancellation of all government-funded MT projects.

1969

The first successful expert systems, DENDRAL, a program for identifying the molecular structure of organic compounds, and MYCIN, designed to diagnose blood infections, are developed at Stanford.

1972

The logic programming language PROLOG is created.

1973

The "Lighthill Report," detailing the disappointments in AI research, is released by the British government and leads to severe cuts in funding for artificial intelligence projects.

1974-1980

Frustration with the progress of AI development leads to major DARPA cutbacks in academic grants. Combined with the earlier ALPAC report and the previous year's "Lighthill Report," artificial intelligence funding dries up and research stalls. This period is known as the "First AI Winter."

1980

Digital Equipment Corporation develops R1 (also known as XCON), the first successful commercial expert system. Designed to configure orders for new computer systems, R1 kicks off an investment boom in expert systems that will last for much of the decade, effectively ending the first "AI Winter."

1982

Japan's Ministry of International Trade and Industry launches the ambitious Fifth Generation Computer Systems project. The goal of FGCS is to develop supercomputer-like performance and a platform for AI development.

1983

In response to Japan's FGCS, the U.S. government launches the Strategic Computing Initiative to provide DARPA-funded research in advanced computing and artificial intelligence.

1985

Companies are spending more than a billion dollars a year on expert systems, and an entire industry known as the Lisp machine market springs up to support them. Companies like Symbolics and Lisp Machines Inc. build specialized computers to run the AI programming language Lisp.

1987-1993

As computing technology improves, cheaper alternatives emerge and the Lisp machine market collapses in 1987, ushering in the "Second AI Winter." During this period, expert systems prove too expensive to maintain and update, eventually falling out of favor.

Japan terminates the FGCS project in 1992, citing failure in meeting the ambitious goals outlined a decade earlier.

DARPA ends the Strategic Computing Initiative in 1993 after spending nearly $1 billion and falling far short of expectations.

1991

U.S. forces deploy DART, an automated logistics planning and scheduling tool, during the Gulf War.

1997

IBM's Deep Blue beats world chess champion Garry Kasparov.

2005

STANLEY, a self-driving car, wins the DARPA Grand Challenge.

The U.S. military begins investing in autonomous robots like Boston Dynamics' "BigDog" and iRobot's "PackBot."

2008

Google makes breakthroughs in speech recognition and introduces the feature in its iPhone app.

2011

IBM's Watson trounces the competition on Jeopardy!.

2012

Andrew Ng, founder of the Google Brain Deep Learning project, feeds a neural network 10 million YouTube videos as a training set, using deep learning algorithms. The neural network learns to recognize a cat without being told what a cat is, ushering in a breakthrough era for neural networks and deep learning funding.

2014

Google's self-driving car becomes the first to pass a state driving test.

2016

Google DeepMind's AlphaGo defeats world champion Go player Lee Sedol. The complexity of the ancient Chinese game was seen as a major hurdle to clear in AI.

MEANING OF INTELLIGENCE
Generally, human intelligence is characterized not by just one trait but by the combination of many diverse abilities. Research in AI has focused chiefly on the following components of intelligence: learning, reasoning, problem solving, perception, and using language.
SOME OF THE USES OF AI
LEARNING

When it comes to learning in any given condition, time or environment, there are a good number of different forms of learning as applied to artificial intelligence. The simplest is learning by trial and error. For example, a simple computer program for solving mate-in-one chess problems might try moves at random until mate is found. The program might then store the solution with the position so that the next time the computer encountered the same position it would recall the solution. This simple memorizing of individual items and procedures, known as rote learning, is relatively easy to implement on a computer. More challenging is the problem of implementing what is called generalization. Generalization involves applying past experience to analogous new situations. For example, a program that learns the past tense of regular English verbs by rote will not be able to produce the past tense of a word such as jump unless it previously had been presented with jumped, whereas a program that is able to generalize can learn the "add ed" rule and so form the past tense of jump based on experience with similar verbs.
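
The rote-versus-generalization contrast can be shown in a few lines of code. This hypothetical sketch uses the past-tense example from the paragraph above; the stored verb list is invented, and real systems would also handle irregular verbs, which this ignores:

```python
# A toy contrast between rote learning and generalization, using the
# past-tense example above. The stored verb list is invented, and
# irregular verbs are ignored.
rote_memory = {"walk": "walked", "talk": "talked"}

def rote_past_tense(verb):
    # Rote learning: only verbs seen before can be answered.
    return rote_memory.get(verb)

def generalized_past_tense(verb):
    # Generalization: the learned "add ed" rule covers unseen verbs.
    return verb + "ed"

print(rote_past_tense("jump"))         # None: "jump" was never memorized
print(generalized_past_tense("jump"))  # "jumped": the rule generalizes
```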
REASONING

Reasoning means to draw inferences appropriate to the situation. Inferences are classified as either deductive or inductive. An example of the former is, "Monday must be in either the classroom or the toilet. He is not in the classroom; therefore he is in the toilet," and of the latter, "Previous accidents of this sort were caused by drinking while driving, which is carelessness; therefore this accident was caused by carelessness." The most significant difference between these forms of reasoning is that in the deductive case the truth of the premises guarantees the truth of the conclusion, whereas in the inductive case the truth of the premise lends support to the conclusion without giving absolute assurance. Inductive reasoning is common in science, where data are collected and tentative models are developed to describe and predict future behaviour, until the appearance of anomalous data forces the model to be revised. Deductive reasoning is common in mathematics and logic, where elaborate structures of irrefutable theorems are built up from a small set of basic axioms and rules.
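
As a loose illustration (hand-coded around the toy examples above, not using any standard logic library), the sketch below contrasts a deductive inference, where the premises force the conclusion, with an inductive one, where observed cases merely support it:

```python
# A loose sketch of the two kinds of inference, using the toy
# examples from the text. Both cases are hand-coded for illustration.

# Deductive: Monday is in the classroom or the toilet; he is not
# in the classroom. The premises guarantee the conclusion.
possible = {"classroom", "toilet"}
conclusion = (possible - {"classroom"}).pop()
print(conclusion)  # "toilet"

# Inductive: every observed accident of this sort was caused by
# carelessness, so we infer, without certainty, the same cause here.
observed_causes = ["carelessness", "carelessness", "carelessness"]
if all(cause == "carelessness" for cause in observed_causes):
    print("this accident was probably caused by carelessness")
```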

There has been considerable success in programming computers to draw inferences, especially deductive inferences. However, true reasoning involves more than just drawing inferences; it involves drawing inferences relevant to the solution of the particular task or situation. This is one of the hardest problems confronting AI.

PROBLEM SOLVING

Problem solving, particularly in artificial intelligence, may be characterized as a systematic search through a range of possible actions in order to reach some predefined goal or solution. Problem-solving methods divide into special-purpose and general-purpose. A special-purpose method is tailor-made for a particular problem and often exploits very specific features of the situation in which the problem is embedded. In contrast, a general-purpose method is applicable to a wide variety of problems. One general-purpose technique used in AI is means-end analysis, a step-by-step, or incremental, reduction of the difference between the current state and the final goal, sketched in the example below.
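
This minimal sketch of means-end analysis works on a made-up numeric problem: at each step the program picks whichever action most reduces the difference between the current state and the goal. The action set and goal here are invented for illustration:

```python
# A minimal sketch of means-end analysis on a made-up numeric problem:
# repeatedly apply whichever action most reduces the difference between
# the current state and the goal. Actions and goal are invented.
actions = {"+10": 10, "+3": 3, "+1": 1, "-1": -1}

def means_end(start, goal):
    state, plan = start, []
    while state != goal:
        # Pick the action leaving the smallest remaining difference.
        name, delta = min(actions.items(),
                          key=lambda item: abs(goal - (state + item[1])))
        state += delta
        plan.append(name)
    return plan

print(means_end(0, 24))  # e.g. ['+10', '+10', '+3', '+1']
```
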
Many diverse problems have been solved by artificial intelligence programs. Some examples are finding the winning move (or sequence of moves) in a board game, devising mathematical proofs, and manipulating “virtual objects” in a computer-generated world.

PERCEPTION AND THOUGHTS

In perception the environment is scanned by means of various sensory organs, real or artificial, and the scene is decomposed into separate objects in various spatial relationships. Analysis is complicated by the fact that an object may appear different depending on the angle from which it is viewed, the direction and intensity of illumination in the scene, and how much the object contrasts with the surrounding field.
At present, artificial perception is sufficiently well advanced to enable optical sensors to identify individuals, autonomous vehicles to drive at moderate speeds on the open road, and robots to roam through buildings collecting empty soda cans.
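
As a toy illustration of scene decomposition (assuming NumPy and SciPy are available; the "image" is a made-up binary grid), the sketch below separates touching pixels into distinct objects, ignoring the viewpoint, lighting, and contrast complications mentioned above:

```python
# A toy sketch of scene decomposition: a tiny binary "image" (made up
# for illustration) is split into separate objects by grouping touching
# pixels. Assumes NumPy and SciPy; real perception must also handle
# viewpoint, lighting, and contrast, which this ignores.
import numpy as np
from scipy.ndimage import label

image = np.array([[1, 1, 0, 0],
                  [1, 0, 0, 1],
                  [0, 0, 1, 1]])

objects, count = label(image)  # connected-component labeling
print(count)    # 2: two separate objects found
print(objects)  # 0 = background, 1..count = object ids
```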

LANGUAGE

A language is a system of signs having meaning by convention. In this sense, language need not be confined to the spoken word. Traffic signs, for example, form a minilanguage, it being a matter of convention that the hazard-warning sign means “hazard ahead” in some countries. It is distinctive of languages that linguistic units possess meaning by convention, and linguistic meaning is very different from what is called natural meaning, exemplified in statements such as “Those clouds mean rain” and “The fall in pressure means the valve is malfunctioning.”

An important characteristic of full-fledged human languages—in contrast to birdcalls and traffic signs—is their productivity. A productive language can formulate an unlimited variety of sentences.

It is relatively easy to write computer programs that seem able, in severely restricted contexts, to respond fluently in a human language to questions and statements. Although none of these programs actually understands language, they may, in principle, reach the point where their command of a language is indistinguishable from that of a normal human.
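
A tiny pattern-matching responder, in the spirit of early restricted-context programs such as ELIZA, shows how fluent-seeming replies can come from no understanding at all. The patterns and replies below are invented for this sketch:

```python
# A tiny pattern-matching responder in the spirit of early restricted-
# context programs such as ELIZA. The patterns and replies are invented;
# nothing here understands language, it only matches surface forms.
import re

rules = [
    (re.compile(r"i am (.+)", re.IGNORECASE), "Why do you say you are {}?"),
    (re.compile(r"i feel (.+)", re.IGNORECASE), "How long have you felt {}?"),
]

def reply(sentence):
    for pattern, template in rules:
        match = pattern.match(sentence)
        if match:
            return template.format(match.group(1))
    return "Tell me more."  # fallback when no pattern matches

print(reply("I am tired of studying"))
# -> Why do you say you are tired of studying?
print(reply("It rained yesterday"))
# -> Tell me more.
```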
