What Is Artificial Intelligence (AI)?

The idea of “a machine that thinks” goes back to ancient Greece. But since the advent of electronic computing (and relative to some of the topics discussed in this article), important events and milestones in the evolution of AI include the following:

1950.
Alan Turing publishes Computing Machinery and Intelligence. In this paper, Turing, famous for breaking the German ENIGMA code during WWII and often called the “father of computer science,” asks the following question: “Can machines think?”

From there, he offers a test, now famously known as the “Turing Test,” in which a human interrogator tries to distinguish between a computer’s and a human’s text responses. While this test has undergone much scrutiny since it was published, it remains an important part of the history of AI, and an ongoing concept within philosophy, as it draws on ideas from linguistics.

1956.
John McCarthy coins the term “artificial intelligence” at the first-ever AI conference, held at Dartmouth College. (McCarthy went on to invent the Lisp language.) Later that year, Allen Newell, J.C. Shaw and Herbert Simon create the Logic Theorist, the first-ever running AI computer program.

1958.
Frank Rosenblatt builds the Mark 1 Perceptron, the first computer based on a neural network that “learned” through trial and error. A decade later, in 1969, Marvin Minsky and Seymour Papert publish a book titled Perceptrons, which becomes both the landmark work on neural networks and, at least for a while, an argument against future neural network research.
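The “trial and error” learning mentioned above can be sketched in a few lines. This is a minimal illustration of the classic perceptron learning rule, not Rosenblatt’s hardware; the AND-gate dataset is an invented toy example:

```python
# Minimal sketch of the perceptron learning rule: weights are only
# nudged after a mistake, so the model learns by trial and error.

def train_perceptron(samples, epochs=10, lr=0.1):
    w = [0.0, 0.0]   # weights, one per input
    b = 0.0          # bias
    for _ in range(epochs):
        for x, target in samples:
            # Step activation: output 1 if the weighted sum crosses 0.
            pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = target - pred          # 0 when correct, +/-1 on a mistake
            w[0] += lr * err * x[0]      # adjust only after an error
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

# Logical AND is linearly separable, so a single perceptron can learn it.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
preds = [1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0 for x, _ in data]
print(preds)  # prints [0, 0, 0, 1], matching the AND targets
```

Minsky and Papert’s critique hinged on exactly this setup: a single perceptron can only learn linearly separable functions (AND works, XOR does not).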

1980.
Neural networks, which use a backpropagation algorithm to train themselves, become widely used in AI applications.
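Backpropagation is the chain rule applied layer by layer: the output error is propagated backward to compute each weight’s gradient, which gradient descent then uses to update the weights. A minimal sketch on a tiny two-neuron network, with an invented toy dataset (this is an illustration of the general technique, not any specific historical system):

```python
import math
import random

random.seed(0)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# One hidden neuron feeding one output neuron; all weights are scalars.
w1, b1 = random.uniform(-1, 1), 0.0   # input -> hidden
w2, b2 = random.uniform(-1, 1), 0.0   # hidden -> output

def forward(x):
    h = sigmoid(w1 * x + b1)
    y = sigmoid(w2 * h + b2)
    return h, y

data = [(0.0, 0.1), (1.0, 0.9)]       # toy (input, target) pairs
lr = 1.0

def loss():
    return sum((forward(x)[1] - t) ** 2 for x, t in data)

before = loss()
for _ in range(500):
    for x, t in data:
        h, y = forward(x)
        # Backward pass: apply the chain rule, one layer at a time.
        dy = 2 * (y - t) * y * (1 - y)   # error at the output neuron
        dw2, db2 = dy * h, dy
        dh = dy * w2 * h * (1 - h)       # error propagated to the hidden neuron
        dw1, db1 = dh * x, dh
        # Gradient-descent update.
        w2 -= lr * dw2; b2 -= lr * db2
        w1 -= lr * dw1; b1 -= lr * db1
after = loss()
print(before > after)  # the squared error shrinks as training proceeds
```

Real networks have many weights per layer, but the pattern is the same: forward pass, error at the output, gradients via the chain rule, update.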

1995.
Stuart Russell and Peter Norvig publish Artificial Intelligence: A Modern Approach, which becomes one of the leading textbooks in the study of AI. In it, they delve into four potential goals or definitions of AI, which differentiate computer systems on the basis of rationality and of thinking versus acting.

1997.
IBM’s Deep Blue beats then-world chess champion Garry Kasparov in a chess match (and rematch).

2004.
John McCarthy writes a paper, What Is Artificial Intelligence?, and proposes an often-cited definition of AI. By this time, the era of big data and cloud computing is underway, enabling organizations to manage ever-larger data estates, which will one day be used to train AI models.

2011.
IBM Watson® beats Ken Jennings and Brad Rutter at Jeopardy! Around this time, data science also begins to emerge as a popular discipline.

2015.
Baidu’s Minwa supercomputer uses a special kind of deep neural network called a convolutional neural network to identify and categorize images with a higher rate of accuracy than the average human.

2016.
DeepMind’s AlphaGo program, powered by a deep neural network, beats Lee Sedol, the world champion Go player, in a five-game match. The victory is significant given the huge number of possible moves as the game progresses (over 14.5 trillion after just four moves). Google had acquired DeepMind in 2014 for a reported USD 400 million.

2022.
A rise in large language models (LLMs), such as OpenAI’s ChatGPT, creates an enormous change in the performance of AI and its potential to drive enterprise value. With these new generative AI practices, deep-learning models can be pretrained on large amounts of data.

2024.
The latest AI trends point to a continuing AI renaissance. Multimodal models that can take multiple types of data as input are providing richer, more robust experiences. These models bring together computer vision image recognition and NLP speech recognition capabilities. Smaller models are also making strides in an age of diminishing returns from massive models with large parameter counts.