What is Artificial Intelligence?

If you’ve ever wondered “what is artificial intelligence?” or googled “was Skynet AI?”, this article is here to answer your questions.

Reading Time: 7 minutes

Artificial Intelligence is seemingly everywhere these days. Recent innovations have peppered the technology throughout our lives, with applications in just about every industry and field. But, if you’ve found yourself wondering what exactly this new technology is, you’ve come to the right place. In this post, we’ll cover what AI is, where it comes from, and how it’s used. 

What is Artificial Intelligence?

Artificial Intelligence (AI) is the field of computer science that focuses on creating intelligent machines capable of performing tasks that typically require human intelligence. It involves developing algorithms and systems that can analyze vast amounts of data, recognize patterns, learn from experience, and make informed decisions or predictions.

AI is an umbrella term that encompasses a wide range of technologies and techniques, so several related terms often come up alongside it. These include machine learning, natural language processing, computer vision, and robotics.

Machine learning, a subset of AI, enables machines to learn and improve from experience without being explicitly programmed.
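To make that concrete, here is a minimal sketch of learning from labeled examples, assuming the scikit-learn library is installed; the data and model choice are illustrative, not prescriptive:

```python
# A minimal sketch of "learning from experience" with scikit-learn
# (illustrative only; the toy data is invented for this example).
from sklearn.linear_model import LogisticRegression

# Toy training data: hours studied -> passed the exam (1) or not (0)
hours_studied = [[1], [2], [3], [4], [5], [6]]
passed_exam = [0, 0, 0, 1, 1, 1]

model = LogisticRegression()
model.fit(hours_studied, passed_exam)  # the model infers the pattern from data

# Predict for a student who studied 3.5 hours -- no explicit rule was written
print(model.predict([[3.5]]))
```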

Natural language processing allows machines to understand, interpret, and respond to human language. This is the foundation of applications like voice assistants and language translation.
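As a hedged illustration of that idea, the sketch below trains a tiny text classifier to label sentences as positive or negative; the sentences and labels are invented for the example, and it assumes scikit-learn is installed:

```python
# A toy NLP sketch: teaching a model to label sentences by sentiment
# (illustrative only; real NLP systems use far larger datasets and models).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

sentences = ["I love this phone", "great battery life",
             "terrible screen", "I hate the camera"]
labels = ["positive", "positive", "negative", "negative"]

# Turn words into counts, then learn which counts go with which label
classifier = make_pipeline(CountVectorizer(), MultinomialNB())
classifier.fit(sentences, labels)

print(classifier.predict(["the battery is great"]))  # likely ['positive']
```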

Computer vision empowers machines to analyze and interpret visual data, facilitating tasks like object recognition and image classification.
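For a small taste of image classification, the sketch below trains a classifier on scikit-learn's built-in dataset of 8x8 handwritten digit images; it is a minimal illustration, not a production vision system:

```python
# A small computer vision sketch: classifying 8x8 images of handwritten digits
# (illustrative only; uses scikit-learn's bundled digits dataset).
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

digits = load_digits()  # 1,797 labeled 8x8 grayscale images
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

classifier = SVC()
classifier.fit(X_train, y_train)  # learn to map pixel values to digit labels
print("accuracy:", classifier.score(X_test, y_test))
```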

Robotics combines AI with mechanical engineering, enabling the development of intelligent machines that can interact with the physical world.

Who invented Artificial Intelligence?

Alan Turing

Alan Turing was a pioneering figure in the field of computer science who made several significant contributions to the development of artificial intelligence (AI). Turing’s ideas laid the groundwork for AI research and continue to shape the field to this day. As a result, he is often considered its inventor.

One of Turing’s most influential contributions was his proposal of the “Turing test” in 1950. The Turing test is a measure of a machine’s ability to exhibit intelligent behavior indistinguishable from that of a human. According to the test, if a machine can engage in natural language conversations and convince a human evaluator that it is human, then it can be considered artificially intelligent. 

The Turing test became a benchmark for AI researchers, encouraging the development of conversational agents and natural language processing capabilities.

Dartmouth College

AI also has its roots in the Dartmouth Conference held in 1956. It was at this conference that researchers coined the term “artificial intelligence” and laid the foundation for the field.

Since its inception, AI has undergone significant evolutions, driven by advancements in computing power, data availability, and algorithmic improvements.

Late 20th Century AI

In the late 20th century, AI experienced an “AI winter” from the late 1970s to the early 1990s, a period characterized by decreased interest and progress in the field. High expectations, limited computing power, and difficulty solving complex problems all contributed to the decline.

The AI winter ended thanks to advances in computing power, practical applications that demonstrated value, the availability of big data, improved algorithms, successful commercial products, and interdisciplinary collaboration. These factors renewed interest, leading to a resurgence of AI research and applications in the late 1990s and early 2000s.

Early 21st Century AI

The early 21st-century AI resurgence was driven by the availability of Big Data, which enabled improved algorithms, pattern recognition, real-world applications, and iterative improvement of AI models.

  • Data Availability: The emergence of Big Data provided AI researchers with access to vast and diverse datasets for training and validation.
  • Enhanced Algorithms: AI algorithms, particularly in machine learning, improved by leveraging large datasets, leading to better performance and accuracy.
  • Pattern Recognition: Big Data allowed for the identification of complex patterns and correlations that were previously difficult to uncover.
  • Real-World Applications: Industries leveraged Big Data and AI to gain insights, make better decisions, and improve operational efficiency.
  • Iterative Improvement: The feedback loop created by Big Data enabled iterative improvement of AI models through continuous learning from real-world data.

What Does AI Do?

The purpose of AI is to automate tasks, enhance decision-making, improve efficiency and productivity, enable personalization, augment human capabilities, and drive innovation and research.

Automate Tasks

AI automates routine and repetitive tasks, freeing up human resources and allowing them to focus on more complex and creative endeavors.

Enhance Decision-Making

AI helps in making informed and data-driven decisions by analyzing large volumes of information, identifying patterns, and providing valuable insights to support decision-making processes.

Improve Efficiency and Productivity

AI technologies optimize processes, streamline operations, and increase efficiency, leading to improved productivity across various industries and sectors.

Enable Personalization

AI enables personalized experiences by analyzing user preferences, behavior, and data, allowing businesses to tailor products, services, and recommendations to individual needs and preferences.

Augment Human Capabilities

AI complements human abilities, enhancing cognitive and physical capabilities and enabling people to perform tasks faster, more accurately, and with less effort.

Advance Innovation and Research

AI fuels innovation by enabling breakthroughs in various fields, driving advancements in healthcare, science, engineering, and other disciplines, leading to new discoveries and solutions.

Different Types of Artificial Intelligence

AI systems are categorized based on how generalizable or specific they are (narrow versus general) or by the way they make decisions (rule-based versus machine learning).

Narrow AI versus General AI

There are different types of AI, ranging from narrow or weak AI to general or strong AI.

Narrow/Weak AI refers to systems designed to perform specific tasks, such as facial recognition or voice assistance, that operate within predefined boundaries.

General/Strong AI aims to replicate human-level intelligence, possessing the ability to understand, learn, and apply knowledge across various domains.

While narrow AI is prevalent today, achieving general AI remains an ongoing challenge, and its development raises ethical and societal considerations.

Rule-Based versus Machine Learning

Rule-based AI, also known as expert systems, relies on predefined rules created by human experts to make decisions or solve problems. These rules are encoded into the AI system, and the system matches input data against these rules to determine the appropriate output or action.

Benefits:

  • Suited for well-defined domains with known and explicitly defined rules

Limitations:

  • May struggle to handle ambiguity or to learn from new data
  • Requires human expertise to create and maintain the rules
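
As a toy illustration of the rule-based approach, the sketch below encodes a few hand-written rules and matches input against them; the symptoms and advice are invented for the example, and real expert systems use far richer rule engines:

```python
# A minimal expert-system sketch: hand-written rules map symptoms to advice
# (illustrative only; the rules and outputs are invented for this example).
def diagnose(symptoms: set[str]) -> str:
    # Each rule was authored by a human expert, not learned from data
    if {"fever", "cough"} <= symptoms:
        return "possible flu -- recommend rest and fluids"
    if "rash" in symptoms:
        return "possible allergy -- recommend antihistamine"
    return "no rule matched -- refer to a human expert"

# The system simply matches inputs against its predefined rules
print(diagnose({"fever", "cough"}))
```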

Machine Learning AI, in contrast, learns from data without explicit rules, using algorithms that analyze patterns and create mathematical models.

Benefits:

  • Adapts internal parameters to optimize performance and makes predictions or decisions based on new, unseen data
  • Excels in complex domains with large amounts of data, discovering intricate patterns and generalizing from examples
  • Adapts and improves performance over time as new data becomes available
  • Relies on training data and algorithms to learn autonomously

Limitations:

  • Need for large amounts of data
  • Risk of overfitting, i.e., failing to generalize beyond the training data
  • Potential to duplicate biases present in data
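
For contrast with the rule-based sketch above, the following illustration learns similar decision logic from example cases instead of hand-written rules; the data is invented and the sketch assumes scikit-learn is installed:

```python
# The same diagnosis idea, learned from data instead of written as rules
# (illustrative only; the cases and labels are invented for this example).
from sklearn.tree import DecisionTreeClassifier

# Each row: [has_fever, has_cough, has_rash]; labels come from past cases
cases = [[1, 1, 0], [1, 1, 0], [0, 0, 1], [0, 0, 0]]
outcomes = ["flu", "flu", "allergy", "healthy"]

model = DecisionTreeClassifier().fit(cases, outcomes)

# The decision logic is inferred from examples, not encoded by an expert
print(model.predict([[1, 1, 0]]))  # -> ['flu']
```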

Both approaches have their strengths and limitations, and the choice between them depends on the specific problem domain and the availability of labeled data and expert knowledge.

Oftentimes, both approaches are used at different stages in the life cycle of an AI project.

AI Uses In Industry

Self-Driving Cars (Tesla)

Tesla’s self-driving cars utilize a combination of AI techniques, including machine learning and expert systems.

Machine learning algorithms analyze vast amounts of data from cameras, radar, and other sensors to recognize and interpret the surrounding environment. Expert systems encode rules and decision-making processes, allowing the car to make real-time decisions based on input from sensors and the learned models.
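
The toy sketch below illustrates that general pattern, a learned perception step feeding a rule-based decision layer; it is a loose illustration of the architecture described above, not Tesla's actual software:

```python
# A toy sketch of the "learned perception + rule-based decisions" pattern
# (illustrative only; in no way Tesla's actual self-driving software).
def detect_objects(sensor_frame: dict) -> list[str]:
    # Stand-in for a learned perception model: in a real system, a neural
    # network would return detected objects from camera and radar input.
    return ["pedestrian"] if sensor_frame.get("motion_ahead") else []

def decide(detected_objects: list[str]) -> str:
    # Rule-based decision layer on top of the learned perception output
    if "pedestrian" in detected_objects:
        return "brake"
    return "continue"

print(decide(detect_objects({"motion_ahead": True})))  # -> brake
```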

Large Language Models (ChatGPT)

Large language models, like the one that powers ChatGPT, rely primarily on unsupervised machine learning techniques applied to text at massive scale. Engineers train these systems on large datasets of text, enabling them to learn patterns, language structures, and context.

By leveraging deep learning algorithms, the models generate coherent and contextually relevant responses to the prompts they receive.
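
As a hedged example of working with a language model, the sketch below generates text with the small open-source GPT-2 model via the Hugging Face `transformers` library; this is not the model behind ChatGPT, just an accessible stand-in:

```python
# A minimal text-generation sketch with an open-source language model
# (illustrative only; assumes the `transformers` library is installed
# and downloads the small GPT-2 model on first run).
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
result = generator("Artificial intelligence is", max_new_tokens=25)
print(result[0]["generated_text"])
```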

Editing and Proofreading (Grammarly)

Grammarly uses a combination of expert systems and machine learning approaches to provide editing and proofreading suggestions. Expert systems encode grammar rules, style guidelines, and best practices.

Machine learning algorithms analyze text patterns and linguistic features to detect errors, suggest corrections, and provide contextual recommendations.
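
A toy sketch in the same spirit appears below: explicit, hand-written rules flag common typos, with a comment marking where a learned model would slot in; this is an invented illustration, not Grammarly's implementation:

```python
# A toy proofreading sketch: hand-written rules plus a placeholder for a
# data-driven check (illustrative only; not Grammarly's actual system).
import re

# Expert-system side: explicit rules encoded as regex patterns
RULES = [
    (re.compile(r"\balot\b"), "Did you mean 'a lot'?"),
    (re.compile(r"\bteh\b"), "Did you mean 'the'?"),
]

def proofread(text: str) -> list[str]:
    suggestions = [msg for pattern, msg in RULES if pattern.search(text)]
    # A real product would also run learned models over the text here,
    # scoring fluency and suggesting context-aware rewrites.
    return suggestions

print(proofread("I like teh beach alot"))  # flags both typos
```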

Learn To Wield The Power Of AI

While Artificial Intelligence is currently exploding in popularity, it is still considered a new field. The rules are still being written, and first movers often gain an advantage over those slow to adapt. At Flatiron School, we’re teaching the skills to help you adapt to the AI revolution.

For enterprise clients, we’ve released entirely new AI training programs. If your organization wants to use AI to work smarter, move faster, and be prepared to innovate with the latest technology, Flatiron School’s suite of AI training programs is just what you’re looking for. Explore our AI training programs today. 

For students, each of our programs has been enhanced with AI. We teach our students how to use the power of AI to accelerate their output and results in Software Engineering, Data Science, Cybersecurity, and Product Design, and to be ready to adapt to the next innovation coming down the pike.

About Christine Egan

Christine is a Python Developer and Natural Language Processing Engineer, as well as a Senior Data Science Curriculum Developer at Flatiron School. She holds a Bachelor of Arts in Linguistics and Philosophy from Stony Brook University and is also an alum of the Flatiron School Data Science Bootcamp. Before joining Flatiron School’s curriculum team, Christine worked as a consultant for various federal agencies. When not working on Python code, you might find her writing data science articles for Medium, or playing Stardew Valley. 

Disclaimer: The information in this blog is current as of June 17, 2023. Current policies, offerings, procedures, and programs may differ.
