Artificial Intelligence (AI)
Artificial intelligence (AI) is the intelligence of machines or software, as opposed to the intelligence of other living beings, primarily of humans. It is a field of study in computer science that develops and studies intelligent machines. Such machines may be called AIs.
AI technology is widely used throughout industry, government, and science. Some high-profile applications are: advanced web search engines (e.g., Google Search), recommendation systems (used by YouTube, Amazon, and Netflix), interacting via human speech (such as Google Assistant, Siri, and Alexa), self-driving cars (e.g., Waymo), generative and creative tools (ChatGPT and AI art), and superhuman play and analysis in strategy games (such as chess and Go).[1]
Alan Turing was the first person to conduct substantial research in the field that he called Machine Intelligence.[2] Artificial intelligence was founded as an academic discipline in 1956.[3] The field went through multiple cycles of optimism[4][5] followed by disappointment and loss of funding.[6][7] Funding and interest vastly increased after 2012 when deep learning surpassed all previous AI techniques,[8] and after 2017 with the transformer architecture.[9] This led to the AI spring of the early 2020s, with companies, universities, and laboratories overwhelmingly based in the United States pioneering significant advances in artificial intelligence.[10]
The various sub-fields of AI research are centered around particular goals and the use of particular tools. The traditional goals of AI research include reasoning, knowledge representation, planning, learning, natural language processing, perception, and support for robotics.[a] General intelligence (the ability to complete any task performable by a human) is among the field's long-term goals.[11]
To solve these problems, AI researchers have adapted and integrated a wide range of problem-solving techniques, including search and mathematical optimization, formal logic, artificial neural networks, and methods based on statistics, operations research, and economics.[b] AI also draws upon psychology, linguistics, philosophy, neuroscience and other fields.[12]
Goals
The general problem of simulating (or creating) intelligence has been broken into sub-problems. These consist of particular traits or capabilities that researchers expect an intelligent system to display. The traits described below have received the most attention and cover the scope of AI research.[a]
Reasoning, problem-solving
Early researchers developed algorithms that imitated step-by-step reasoning that humans use when they solve puzzles or make logical deductions.[13] By the late 1980s and 1990s, methods were developed for dealing with uncertain or incomplete information, employing concepts from probability and economics.[14]
Many of these algorithms are insufficient for solving large reasoning problems because they experience a "combinatorial explosion": they become exponentially slower as the problems grow larger.[15] Even humans rarely use the step-by-step deduction that early AI research could model. They solve most of their problems using fast, intuitive judgments.[16] Accurate and efficient reasoning is an unsolved problem.
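To make the combinatorial explosion concrete, the sketch below (an illustrative example, not drawn from any particular system) brute-forces a toy task-sequencing puzzle by enumerating every ordering; because the number of candidate orderings grows as n!, even a modest increase in problem size overwhelms the search.

```python
from itertools import permutations

def solve_by_enumeration(tasks, is_valid):
    """Try every ordering of tasks; return (solution, number of orderings examined)."""
    checked = 0
    for order in permutations(tasks):
        checked += 1
        if is_valid(order):
            return order, checked
    return None, checked

# A deliberately hard constraint: only the reverse-sorted ordering is valid.
# permutations() yields orderings in lexicographic order, so this is the very
# last candidate and the search must examine all n! orderings (the worst case).
def reverse_sorted(order):
    return list(order) == sorted(order, reverse=True)

for n in range(4, 9):
    tasks = [chr(ord("a") + i) for i in range(n)]  # 'a', 'b', 'c', ...
    _, checked = solve_by_enumeration(tasks, reverse_sorted)
    print(f"{n} tasks: {checked} orderings examined")
```

Running it shows the count climbing from 24 orderings at four tasks to over 40,000 at eight, which is why practical systems lean on heuristics, pruning, or the probabilistic methods mentioned above rather than exhaustive deduction.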
Knowledge representation

Knowledge representation and knowledge engineering[17] allow AI programs to answer questions intelligently and make deductions about real-world facts. Formal knowledge representations are used in content-based indexing and retrieval,[18] scene interpretation,[19] clinical decision support,[20] knowledge discovery (mining "interesting" and actionable inferences from large databases),[21] and other areas.[22]
A knowledge base is a body of knowledge represented in a form that can be used by a program. An ontology is the set of objects, relations, concepts, and properties used by a particular domain of knowledge.[23] Knowledge bases need to represent things such as: objects, properties, categories and relations between objects;[24] situations, events, states and time;[25] causes and effects;[26] knowledge about knowledge (what we know about what other people know);[27] default reasoning (things that humans assume are true until they are told differently and will remain true even when other facts are changing);[28] and many other aspects and domains of knowledge.
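As a rough illustration of these ideas, the following sketch stores knowledge as (subject, relation, object) triples and performs a simple deduction: properties attached to a category are inherited by its members through "is_a" links. The facts, relation names, and the canary example are hypothetical and chosen only to show the mechanism.

```python
# A toy knowledge base of (subject, relation, object) triples. All names and
# facts here are illustrative only; real knowledge representation systems are
# far richer (defaults, exceptions, time, causality, and so on).

facts = {
    ("Tweety", "is_a", "canary"),
    ("canary", "is_a", "bird"),
    ("bird", "has_property", "can_fly"),
    ("bird", "has_property", "has_feathers"),
}

def categories_of(entity):
    """Follow is_a links transitively to collect every category of an entity."""
    found, frontier = set(), {entity}
    while frontier:
        current = frontier.pop()
        for subj, rel, obj in facts:
            if subj == current and rel == "is_a" and obj not in found:
                found.add(obj)
                frontier.add(obj)
    return found

def properties_of(entity):
    """Deduce properties the entity inherits from its categories."""
    cats = categories_of(entity) | {entity}
    return {obj for subj, rel, obj in facts
            if rel == "has_property" and subj in cats}

print(properties_of("Tweety"))  # {'can_fly', 'has_feathers'} (set order may vary)
```

Default reasoning, in the sense described above, would add exceptions to such inherited properties (a penguin is a bird that cannot fly), which is one reason practical knowledge representation is considerably harder than this sketch suggests.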
History
The study of mechanical or "formal" reasoning began with philosophers and mathematicians in antiquity. The study of logic led directly to Alan Turing's theory of computation, which suggested that a machine, by shuffling symbols as simple as "0" and "1", could simulate both mathematical deduction and formal reasoning, which is known as the Church–Turing thesis.[244] This, along with concurrent discoveries in cybernetics and information theory, led researchers to consider the possibility of building an "electronic brain".[q][246]
Alan Turing was thinking about machine intelligence at least as early as 1941, when he circulated a paper on machine intelligence which could be the earliest paper in the field of AI – though it is now lost.[2] The first available paper generally recognized as "AI" was McCulloch and Pitts's design for Turing-complete "artificial neurons" in 1943 – the first mathematical model of a neural network.[247] The paper was influenced by Turing's earlier paper 'On Computable Numbers' from 1936, which used similar two-state boolean 'neurons', but was the first to apply them to neuronal function.[2]
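The sketch below conveys the flavour of such a two-state threshold unit: boolean inputs, a fixed threshold, and inhibitory inputs that veto firing. It is a simplified, illustrative reconstruction rather than the exact formalism of the 1943 paper; the function names and gate examples are this sketch's own.

```python
# Illustrative McCulloch-Pitts style threshold unit (not the 1943 formalism).
# The neuron fires (outputs 1) only if no inhibitory input is active and the
# number of active excitatory inputs reaches the threshold.

def mp_neuron(excitatory, inhibitory, threshold):
    """Two-state (0/1) neuron with absolute inhibition and a firing threshold."""
    if any(inhibitory):
        return 0
    return 1 if sum(excitatory) >= threshold else 0

# Basic logic gates built from single units -- the starting point for the
# computational power of networks of such neurons.
AND = lambda a, b: mp_neuron([a, b], [], threshold=2)
OR  = lambda a, b: mp_neuron([a, b], [], threshold=1)
NOT = lambda a:    mp_neuron([1], [a], threshold=1)

print(AND(1, 1), OR(0, 1), NOT(1))  # 1 1 0
```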
During his lifetime Turing used the term 'machine intelligence'; after his death in 1954 the work was increasingly referred to as 'artificial intelligence'. In 1950 Turing published the best known of his papers, 'Computing Machinery and Intelligence', which introduced to a general audience the concept now known as the Turing test. There followed three radio broadcasts on AI by Turing: the lectures 'Intelligent Machinery, A Heretical Theory' and 'Can Digital Computers Think?', and the panel discussion 'Can Automatic Calculating Machines Be Said to Think?'. By 1956 computer intelligence had been actively pursued in Britain for more than a decade; the earliest AI programs were written there in 1951–1952.[2]
In 1951, using a Ferranti Mark 1 computer at the University of Manchester, checkers and chess programs were written that allowed a person to play against the computer.[248] The field of American AI research was founded at a workshop at Dartmouth College in 1956.[r][3] The attendees became the leaders of AI research in the 1960s.[s] They and their students produced programs that the press described as "astonishing":[t] computers were learning checkers strategies, solving word problems in algebra, proving logical theorems and speaking English.[u][4] Artificial intelligence laboratories were set up at a number of British and US universities in the late 1950s and early 1960s.[2]
They had, however, underestimated the difficulty of the problem.[v] Both the U.S. and British governments cut off exploratory research in response to the criticism of Sir James Lighthill[253] and ongoing pressure from the U.S. Congress to fund more productive projects. Minsky's and Papert's book Perceptrons was understood as proving that artificial neural networks would never be useful for solving real-world tasks, thus discrediting the approach altogether.[254] The "AI winter", a period when obtaining funding for AI projects was difficult, followed.[6]
In the early 1980s, AI research was revived by the commercial success of expert systems,[255] a form of AI program that simulated the knowledge and analytical skills of human experts. By 1985, the market for AI had reached over a billion dollars. At the same time, Japan's fifth generation computer project inspired the U.S. and British governments to restore funding for academic research.[5] However, beginning with the collapse of the Lisp Machine market in 1987, AI once again fell into disrepute, and a second, longer-lasting winter began.[7]
Many researchers began to doubt that the current practices would be able to imitate all the processes of human cognition, especially perception, robotics, learning and pattern recognition.[256] A number of researchers began to look into "sub-symbolic" approaches.[257] Robotics researchers, such as Rodney Brooks, rejected "representation" in general and focussed directly on engineering machines that move and survive.[w] Judea Pearl, Lotfi Zadeh and others developed methods that handled incomplete and uncertain information by making reasonable guesses rather than precise logic.[85][262] But the most important development was the revival of "connectionism", including neural network research, by Geoffrey Hinton and others.[263] In 1990, Yann LeCun successfully showed that convolutional neural networks can recognize handwritten digits, the first of many successful applications of neural networks.[264]
AI gradually restored its reputation in the late 1990s and early 21st century by exploiting formal mathematical methods and by finding specific solutions to specific problems. This "narrow" and "formal" focus allowed researchers to produce verifiable results and collaborate with other fields (such as statistics, economics and mathematics).[265] By 2000, solutions developed by AI researchers were being widely used, although in the 1990s they were rarely described as "artificial intelligence".[266]
Several academic researchers became concerned that AI was no longer pursuing the original goal of creating versatile, fully intelligent machines. Beginning around 2002, they founded the subfield of artificial general intelligence (or "AGI"), which had several well-funded institutions by the 2010s.[11]
Deep learning began to dominate industry benchmarks in 2012 and was adopted throughout the field.[8] For many specific tasks, other methods were abandoned.[x] Deep learning's success was based on both hardware improvements (faster computers,[268] graphics processing units, cloud computing[269]) and access to large amounts of data[270] (including curated datasets,[269] such as ImageNet).
Deep learning's success led to an enormous increase in interest and funding in AI.[y] The amount of machine learning research (measured by total publications) increased by 50% in the years 2015–2019,[234] and WIPO reported that AI was the most prolific emerging technology in terms of the number of patent applications and granted patents.[271] According to 'AI Impacts', about $50 billion annually was invested in "AI" around 2022 in the US alone and about 20% of new US Computer Science PhD graduates have specialized in "AI";[272] about 800,000 "AI"-related US job openings existed in 2022.[273] The large majority of the advances have occurred within the United States, with its companies, universities, and research labs leading artificial intelligence research.[10]
In 2016, issues of fairness and the misuse of technology were catapulted to center stage at machine learning conferences, publications vastly increased, funding became available, and many researchers re-focussed their careers on these issues. The alignment problem became a serious field of academic study.
