Artificial intelligence, or AI, is a combination of sophisticated algorithms, computing, and data training methods that allow machines and computers to mimic human knowledge and behaviors.
In some ways, artificial intelligence is the opposite of natural intelligence. While living creatures are born with natural intelligence, man-made machines can be said to possess artificial intelligence. In fact, one of the early pioneers of AI, John McCarthy, defined artificial intelligence as “the science and engineering of making intelligent machines.”
In practice, however, computer scientists use the term artificial intelligence to refer to machines performing the kinds of thinking and tasks that were once possible only for humans.
Also see: Top Generative AI Apps and Tools
Artificial Intelligence: Table of Contents
- What is Artificial Intelligence in Simple Terms?
- What is Generative AI?
- Artificial Intelligence vs. Machine Learning
- Machine Learning vs. Deep Learning
- Types of Artificial Intelligence
- Short History of AI
- AI Use Cases: What Can AI Do?
- Pros and Cons of Using AI
- The Limitations of AI
- Ethics and AI
- Bottom Line: The Future of AI
- Artificial Intelligence: Additional Resources
What Is Artificial Intelligence in Simple Terms?
Computers are very good at making calculations — taking inputs, manipulating them, and generating outputs as a result. But in the past, they have not been able to do other kinds of human tasks, such as understanding and generating language, identifying objects by sight, creating art, or learning from past experiences.
Today, many computer systems have the ability to communicate with humans using ordinary speech. They can recognize faces and other objects. They use machine learning techniques, especially deep learning and neural networks, in ways that allow them to learn from the past and make predictions about the future.
Much of this technology is still being developed and advanced every day, but now, even the average consumer can access AI models to generate content, solve problems, and handle a number of other advanced tasks.
Also see: Generative AI Companies: Top 12 Leaders
What Is Generative AI?
Generative AI is a specific, emerging form of artificial intelligence that relies on big data training sets, neural networks, deep learning, and some natural language processing to create original content outputs. Although the most commonly used generative AI tools currently generate text and code, generative AI solutions can also generate images, audio, and synthetic data, among other outputs.
Generative AI is perhaps the most popular and fastest-growing type of AI today, especially with the global popularity of OpenAI’s ChatGPT and GPT-4. Other popular examples of generative AI include Google Bard, Jasper, Stable Diffusion, DALL-E, Microsoft and GitHub Copilot, and DreamStudio.
Learn more: What is Generative AI?
Artificial Intelligence vs. Machine Learning
At the simplest level, machine learning (ML) is a subset of artificial intelligence. While the greater artificial intelligence umbrella is dedicated to all kinds of approaches to human-like problem-solving, machine learning involves developing a specifically trained model that focuses on teaching machines to complete focused tasks and identify data patterns. In many cases, machine learning is used in conjunction with other forms of artificial intelligence.
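To make the idea of a "specifically trained model" concrete, here is a rough sketch of one of the simplest machine learning techniques, a nearest-neighbor classifier. The data points, labels, and thresholds below are invented purely for illustration; real systems learn from far larger datasets.

```python
# Minimal illustration of machine learning: a 1-nearest-neighbor classifier
# "trained" on labeled examples, then used to classify a new data point.

def nearest_neighbor(train, new_point):
    """Return the label of the training example closest to new_point."""
    def dist(a, b):
        # Squared Euclidean distance between two feature vectors.
        return sum((x - y) ** 2 for x, y in zip(a, b))
    closest = min(train, key=lambda item: dist(item[0], new_point))
    return closest[1]

# Each training example pairs a feature vector with a label.
training_data = [
    ((1.0, 1.0), "cat"),
    ((1.2, 0.8), "cat"),
    ((8.0, 9.0), "dog"),
    ((9.0, 8.5), "dog"),
]

print(nearest_neighbor(training_data, (1.1, 0.9)))  # near the "cat" cluster
print(nearest_neighbor(training_data, (8.5, 9.1)))  # near the "dog" cluster
```

The "training" here is just storing labeled examples; the pattern-identification step happens at prediction time by comparing the new input to that history.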
Continue the comparison: AI vs. ML: Artificial Intelligence and Machine Learning Overview
Machine Learning vs. Deep Learning
Just as machine learning is a subset of artificial intelligence, deep learning is a subset of machine learning. Machine learning as a whole is about deriving insights from big datasets and making decisions based on the information these solutions find. It is an algorithmic, data-driven approach to decision-making. Deep learning is also an algorithmic approach to decision-making, but it’s a bit more complex; instead of working with one or a small number of algorithms, deep learning models work with multiple layers of algorithms — known as a neural network. This structure is designed to help deep learning models mimic the functions of human brains.
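The "multiple layers" idea can be sketched in a few lines of Python. The toy network below passes an input through two dense layers with a ReLU nonlinearity; the weights are hard-coded purely for demonstration, whereas a real deep learning model learns them from data.

```python
# Sketch of the layered structure behind deep learning: data flows through
# successive layers, each computing weighted sums followed by a nonlinearity.

def relu(x):
    return max(0.0, x)

def layer(inputs, weights, biases):
    """One dense layer: a weighted sum per neuron, then ReLU."""
    return [
        relu(sum(w * x for w, x in zip(neuron_weights, inputs)) + b)
        for neuron_weights, b in zip(weights, biases)
    ]

def forward(x):
    # Layer 1: two inputs -> three hidden neurons (weights are illustrative).
    h = layer(x, weights=[[1.0, -1.0], [0.5, 0.5], [-1.0, 1.0]],
              biases=[0.0, 0.0, 0.0])
    # Layer 2: three hidden neurons -> one output neuron.
    out = layer(h, weights=[[1.0, 1.0, 1.0]], biases=[0.0])
    return out[0]

print(forward([2.0, 1.0]))  # 2.5
```

Stacking many such layers is what gives a neural network its depth, and tuning those weights automatically is what the "learning" in deep learning refers to.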
Get a closer look: Machine Learning vs. Deep Learning
Types of Artificial Intelligence
Computer scientists have proposed different ways to classify the types of AI. One popular classification uses three categories:
1. Artificial Narrow Intelligence
Artificial Narrow Intelligence (ANI) is designed to complete one task or set of tasks with high competence and skill. Apple’s Siri, IBM’s Watson, and Google’s AlphaGo are all examples of Narrow AI. Narrow AI is fairly common in the world today.
2. Artificial General Intelligence
Artificial General Intelligence (AGI) is a form of AI that performs many intellectual tasks on par with a human. Many researchers are currently working toward general AI. Some point to GPT-4 as an early step in that direction, as it can solve a variety of problems and has performed well on a number of standardized human tests.
3. Artificial Superintelligence
Artificial Superintelligence (ASI), which is still theoretical, has intellectual capacities that far outstrip those of humans. This kind of artificial intelligence is not yet close to becoming a reality.
Also see: How AI is Altering Software Development with AI-Augmentation
Another popular classification uses four different categories:
1. Reactive Machines
Reactive machines take an input and deliver an output, but they do not have memory or learn from past experiences. The bots you can play against in many video games are good examples of reactive machines.
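A reactive machine can be sketched as a pure function: the current input fully determines the output, with no stored state. The toy rock-paper-scissors bot below, invented for illustration, always plays the counter to the move it is shown.

```python
# A reactive "machine" in miniature: it maps the current input directly to
# an output and retains no memory of previous rounds.

COUNTER = {"rock": "paper", "paper": "scissors", "scissors": "rock"}

def reactive_bot(opponent_move):
    # Pure function of the current input: no stored state, no learning.
    return COUNTER[opponent_move]

print(reactive_bot("rock"))      # paper
print(reactive_bot("scissors"))  # rock
```

Because nothing is remembered between calls, the bot can never adapt to an opponent's habits; that adaptation is exactly what the next category adds.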
2. Limited Memory
Limited memory machines can look a short distance back into the past. Many vehicles on the road today have advanced safety features that fall into this category. For example, if your car issues a backup warning when a vehicle or person is about to pass behind your car, it is using a limited set of historical data to come to conclusions and deliver outputs.
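The backup-warning example can be sketched as code: the system keeps only a short window of recent sensor readings and reasons over that window. The class, window size, and threshold below are invented for illustration, not taken from any real vehicle system.

```python
from collections import deque

# Sketch of "limited memory" AI: a toy backup warning that fires when the
# last few rear-distance readings show an object steadily approaching.

class BackupWarning:
    def __init__(self, window_size=3, threshold=2.0):
        # deque(maxlen=...) automatically discards readings older than the
        # window, which is what makes the memory "limited."
        self.readings = deque(maxlen=window_size)
        self.threshold = threshold  # metres

    def update(self, distance_m):
        """Record a new rear-distance reading; return True to warn."""
        self.readings.append(distance_m)
        if len(self.readings) < self.readings.maxlen:
            return False  # not enough history yet
        r = list(self.readings)
        approaching = all(a > b for a, b in zip(r, r[1:]))
        return approaching and r[-1] < self.threshold

sensor = BackupWarning()
print(sensor.update(5.0))  # False: not enough history yet
print(sensor.update(3.5))  # False: still above the threshold
print(sensor.update(1.8))  # True: object closing in and under 2 m
```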
3. Theory of Mind
Theory of mind machines are aware that human beings and other entities exist and have their own independent motivations. Most researchers agree that this kind of AI has not yet been developed, and some researchers say that we should not attempt to do so. However, some of the latest generative AI models are performing well in theory of mind tasks and tests.
4. Self-Awareness
Self-aware machines are aware of their own existence and identities. A few researchers claim that self-aware AI already exists today, but this remains a minority view, and developing self-aware AI is highly controversial.
While these classifications are interesting from a theoretical standpoint, most organizations are far more interested in what they can do with AI.
Also see: Three Ways to Get Started with AI
A Short History of Artificial Intelligence
1950s and 1960s
Many people trace the history of artificial intelligence back to 1950 when Alan Turing published “Computing Machinery and Intelligence.” Turing’s essay begins with “I propose to consider the question, ‘Can machines think?’” It then lays out a scenario that came to be known as the Turing Test. Turing proposed that a computer could be considered intelligent if a person could not distinguish the machine from a human being.
In 1956, John McCarthy and Marvin Minsky hosted the first artificial intelligence conference, the Dartmouth Summer Research Project on Artificial Intelligence (DSRPAI). The conference convinced computer scientists that artificial intelligence was an achievable goal, setting the foundation for several decades of further research. Some of the earliest forays into AI technology, including bots developed to play checkers and chess, emerged as a result of this conference.
The 1960s saw the development of rudimentary robots and several problem-solving programs. One notable highlight was the creation of ELIZA, a program that simulated psychotherapy and provided an early example of human-machine communication.
1970s and 1980s
In the 1970s and 80s, AI development continued but at a slower pace. The field of robotics in particular saw significant advances, such as robots that could see and walk. Additionally, Mercedes-Benz introduced the first (extremely limited) autonomous vehicle. However, government funding for AI research decreased dramatically, leading to a period some refer to as the “AI winter.”
1990s and Early 2000s
Interest in AI surged again in the 1990s. The Artificial Linguistic Internet Computer Entity (ALICE) chatbot demonstrated that natural language processing could lead to human-computer communication that felt more natural than what had been possible with ELIZA. The decade also saw a surge in analytic techniques that would form the basis of later AI development, as well as the development of the first recurrent neural network architecture. This was also the decade when IBM rolled out Deep Blue, the first computer chess system to defeat a reigning world champion.
The first decade of the 2000s saw rapid innovation in robotics. Roombas began vacuuming rugs and robots launched by NASA explored Mars. Closer to home, Google was working on a driverless car.
2010s
The years since 2010 have been marked by unprecedented increases in AI technology. Both hardware and software developed to a point where object recognition, natural language processing, and voice assistants became possible. IBM’s Watson won Jeopardy. Siri, Alexa, and Cortana came into being, and chatbots became a fixture of modern retail. Google DeepMind’s AlphaGo beat human Go champions. And enterprises in all industries have begun deploying AI tools to help them analyze their data and become more successful.
Perhaps most significant to today’s generative AI landscape, in 2017, Google released a research paper that first identified a neural network architecture concept called the Transformer. The transformer has since become one of the foundational technologies for developing generative AI models.
2020s
AI is beginning to evolve past narrow and limited functions into more advanced implementations, some of which are accessible to the general public; indeed, this decade seems to place more focus on AI democratization than ever before. The early years of this decade have seen the rise of generative AI, with more complex models created for enterprise users and simplified, low-cost versions available to all users.
The most visible AI advancements to date took off in late 2022, when OpenAI first launched its ChatGPT chatbot and large language model (LLM). Many competitors and similar models have since emerged to support text, code, audio, video, image, and synthetic data generation requirements.
Beyond content generation in its various forms, AI advancements of the 2020s include AI-powered search and virtual assistants in web browsers and various business applications, AI-powered medical and pharmaceutical research, and more advanced instances of AI-powered computer vision for AR, VR, and XR experiences. Increasingly, AI is being regulated and its ethics and environmental impact are being discussed.
Also see: The History of Artificial Intelligence
AI Use Cases: What Can AI Do?
The possible AI use cases and applications for artificial intelligence are nearly limitless. Some of today’s most common AI use cases include the following:
Content generation
Generative AI models are being used to generate content in a variety of formats: not just text but also code, synthetic data, audio and music, images, video, and voice. Content generation models are currently applied to a variety of industries and use cases, including marketing and sales, customer service, employee coaching, cybersecurity, computer vision, healthcare and pharmaceuticals, entertainment and gaming, and legal and government.
More on this topic: Generative AI Examples
Recommendation engines
Whether you’re shopping for a new sweater, looking for a movie to watch, scrolling through social media, or trying to find true love, you’re likely to encounter an AI-based algorithm that makes suggestions. Most recommendation engines use machine learning models to compare your characteristics and historical behavior to people around you. The models can be very good at identifying preferences even when users aren’t aware of those preferences themselves.
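The comparison step at the heart of most recommendation engines can be sketched with cosine similarity between user rating vectors: find the most similar other user, then suggest something that user rated which the target user has not. The users, items, and ratings below are entirely invented for demonstration.

```python
import math

# Toy user-based recommender: compare rating vectors, pick the most
# similar other user, and suggest their highest-rated unrated item.

def cosine(a, b):
    """Cosine similarity between two rating vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

ITEMS = ["A", "B", "C", "D"]
# Ratings for items A, B, C, D (0 = not yet rated).
ratings = {
    "alice": [5, 4, 0, 1],
    "bob":   [4, 5, 2, 0],
    "carol": [1, 0, 5, 4],
}

def recommend(user):
    others = [u for u in ratings if u != user]
    # The neighbor whose tastes most resemble this user's.
    best_match = max(others, key=lambda u: cosine(ratings[user], ratings[u]))
    unrated = [i for i, r in enumerate(ratings[user]) if r == 0]
    # Of the items the user hasn't rated, take the neighbor's favorite.
    pick = max(unrated, key=lambda i: ratings[best_match][i])
    return ITEMS[pick]

print(recommend("alice"))  # C
```

Production systems add many refinements (implicit feedback, matrix factorization, deep models), but this neighbor-comparison idea is the core of collaborative filtering.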
Natural language processing
Natural language processing (NLP) is a broad category of AI that encompasses speech-to-text, text-to-speech, keyword identification, information extraction, translation, and language generation. It allows humans and computers to interact through ordinary human language (audio or typed), rather than through programming languages. Because many NLP tools incorporate machine learning capabilities, they tend to improve over time.
Sentiment analysis
AI can not only understand human language, but it can also identify the emotions underpinning human conversation. For example, AI can analyze thousands of tech support conversations or social media interactions and identify which customers are experiencing strong positive or negative emotions. This type of analysis allows customer support teams to focus on customers that might be at risk of defecting and/or extremely enthusiastic supporters who could become advocates for the brand.
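At its most bare-bones, sentiment analysis can be sketched as scoring text against small positive and negative word lists. Real systems use trained language models, but this invented example shows the underlying idea of mapping language to an emotion signal.

```python
# Naive lexicon-based sentiment scoring: count positive vs. negative words.
# The word lists here are tiny and purely illustrative.

POSITIVE = {"great", "love", "excellent", "happy", "helpful"}
NEGATIVE = {"terrible", "hate", "broken", "angry", "useless"}

def sentiment(text):
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("I love this product and support was excellent"))  # positive
print(sentiment("the unit arrived broken and useless"))            # negative
```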
Voice synthesis and assistance
Many of us interact with Siri, Alexa, Cortana, or Google on a daily basis. While we often take these assistants for granted, they incorporate advanced AI techniques, including natural language processing and machine learning. Several new generative AI solutions offer voice synthesis and assistance as well.
Fraud detection
Financial services companies and retailers often use highly advanced machine learning techniques to identify fraudulent transactions. They look for patterns in financial data, and when a transaction looks abnormal or fits a known pattern of fraud, they issue alerts that can stop or mitigate criminal activity.
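The "looks abnormal" part of that pipeline can be sketched with a simple statistical outlier check: flag any transaction that sits far outside a customer's usual spending pattern. The transaction history and cutoff below are invented; real fraud systems use far richer features and trained models.

```python
import statistics

# Toy anomaly flagging: a transaction whose z-score against the account
# history exceeds the cutoff triggers an alert.

def flag_outliers(amounts, z_cutoff=3.0):
    """Return the amounts whose z-score exceeds the cutoff."""
    mean = statistics.mean(amounts)
    stdev = statistics.stdev(amounts)
    return [a for a in amounts if abs(a - mean) / stdev > z_cutoff]

history = [42.0, 18.5, 60.0, 35.0, 27.0, 51.0, 2500.0]
print(flag_outliers(history, z_cutoff=2.0))  # flags the 2500.0 charge
```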
Image recognition
Many of us use AI-based facial recognition to unlock our phones. This kind of AI also enables autonomous vehicles and automates processing for many health-related scans and tests.
Predictive maintenance
Many industries like manufacturing, oil and gas, transportation, and energy rely heavily on machinery, and when that machinery experiences downtime, it can be extremely costly. Firms are now using a combination of object recognition and machine learning techniques to identify in advance when equipment is likely to break down so they can schedule maintenance at a time that minimizes disruptions.
Predictive and prescriptive analytics
Predictive algorithms can analyze just about any kind of business data and use that as the basis for forecasting likely future events. Prescriptive analytics, which is still in its infancy, goes a step further and not only makes a forecast but also offers recommendations as to what organizations should do to prepare for likely future events. These AI-powered approaches to analytics are used across a variety of industries but are particularly gaining steam in quote-based industries like insurance.
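As a toy illustration of the forecasting idea (not of any particular product), the snippet below predicts the next value in a series as the average of the most recent observations. The sales figures and window size are invented.

```python
# Naive moving-average forecast: predict the next point in a series as the
# mean of the last `window` observations. Real predictive analytics uses
# far more sophisticated models; this only shows the core idea.

def moving_average_forecast(series, window=3):
    """Predict the next value as the mean of the last `window` values."""
    recent = series[-window:]
    return sum(recent) / len(recent)

monthly_sales = [100, 110, 105, 120, 125, 130]
print(moving_average_forecast(monthly_sales))  # (120 + 125 + 130) / 3 = 125.0
```

A prescriptive layer would go one step further, attaching a recommended action (for example, increasing inventory) to the forecast rather than just reporting the number.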
Autonomous vehicles
Most vehicles in production today have some autonomous features, such as parking assistance, lane centering, and adaptive cruise control. And while they are still expensive and relatively rare, fully autonomous vehicles are already on the road, and the AI technology that powers them is getting better and less expensive every day.
Robots
Industrial robots were one of the earliest implementations of AI, and they continue to be an important part of the AI market. Consumer robots, such as robot vacuum cleaners, bartenders, and lawnmowers, are becoming increasingly commonplace.
Of course, these are just some of the more widely known use cases for AI. The technology is seeping into daily life in so many ways that we are often not fully aware of its presence.
AIOps
AIOps — artificial intelligence for IT operations — is increasingly being used to simplify workflows and workloads for skilled tech workers. AI can be used to complete tasks related to service and performance management and data management and analysis.
Also see: Best Machine Learning Platforms
Pros and Cons of Using AI
AI is permeating every corner of the business and home, but, much like with any other new and fast-changing technology, artificial intelligence presents both considerable pros and cons.
Pros of AI
- High levels of accuracy: Reduces the potential for human error and simplifies workflows.
- Increases employee availability: Takes repetitive task loads off human workers, allowing them to focus on more meaningful tasks.
- Advanced content generation capabilities: Fast and affordable content generation and data analysis are possible with many AI models.
- New research and discoveries powered by AI-driven analysis: AI developments are leading to sophisticated analysis advancements in fields like medicine and pharmaceuticals.
Cons of AI
- Often expensive and power-hungry: High costs and energy requirements are often part of running AI models.
- User privacy and security shortcomings: User privacy and security concerns are paramount, especially with how much data is required to train AI models; with the latest generative AI models, there’s also concern that the models will learn from, retain, and share user inputs without authorization.
- Workforce disruption: Advanced AI tools could potentially take jobs away from human workers.
- No sense of self: AI has no self-awareness or self-driven creativity; everything is programmed and can lead to bias or inappropriate/dangerous outputs.
On a related topic: The AI Market: An Overview
The Limitations of AI
AI is limited both by the data it’s trained with and the environment in which it’s operating. Here are a few examples of the limitations of AI:
- AI bias: If training data is not robust, accurate, and varied, the model can suffer from inaccurate or partial outputs. AI bias is a major concern, as machines that are trained on biased data may not serve the needs of more diverse populations and use cases.
- No emotions or creativity: AI continues to grow in its conversational and creative capabilities, but it is still an algorithmic model that does not “think” but rather operates on its training. As such, AI solutions cannot fully replace the emotional intelligence and sparks of creativity that humans have.
- Limited recall and contextual understanding: Although some of the latest generative AI models and other AI models can pull from their recent history, many AI tools can only handle inputs without considering any additional context while generating outputs.
- Limited timeliness: Not all AI models have real-time access to the internet and other resources with updated information. They may generate inaccurate responses or fail to learn from recent mistakes as a result.
- Compute power requirements: Most AI models require significant compute power, which can be expensive and energy-intensive to implement. That is why many AI models are limited to large enterprises that have the resources to run this kind of AI.
The Importance of AI Ethics
AI ethics has been discussed in a mostly theoretical sense for many years, but as AI has become more mainstream and capable, these discussions have become more important than ever. AI ethics matters to the long-term health and development of the field because ethical failures can cost businesses customers, reputation, and money, and can draw them into legal battles. In some cases, unethical AI could even lead to the loss of human life.
With a strong ethical AI framework in place, AI companies and users can expect the following benefits:
- Avoids harmful biases: AI ethics focuses heavily on creating tools that work well for everyone, including users across the globe and of varying races, genders, cultural backgrounds, and disabilities.
- Protects user privacy: AI requires massive amounts of data to run successfully, and sometimes that data encroaches on personal privacy. Ethical AI frameworks push developers to handle user inputs more carefully, along with sensitive data such as payment information and a person’s image or identity.
- Encourages responsible environmental impact: Many AI models use a lot of energy, which is already having negative consequences for the environment. Some of the foremost AI companies in the world are working to incorporate responsible energy consumption and other environmental considerations into their AI ethics.
- Increases human safety features: AI ethics encourage AI developers to create tools that put user and human safety at the forefront; autonomous vehicles, for example, must be tested and vetted thoroughly before they can be operated by humans or driven near pedestrians.
More on this topic: AI Ethics: An Overview
The Future of AI
So what does the future of AI look like? Clearly, AI is already reshaping consumer and business markets, but it has a ways to go before it truly matches human knowledge and capabilities.
The technology that powers AI continues to progress at a steady rate. Future advances like quantum computing may eventually enable major new innovations, but in the near term, it seems likely that the technology itself will continue along a predictable path of constant improvement.
What’s less clear is how humans will adapt to AI. Many early AI implementations have run into major challenges. In some cases, the data used to train models has allowed bias to infect AI systems, rendering them unusable.
In many other cases, businesses have not always seen the financial results they hoped for after deploying AI. The technology may be mature, but the business processes surrounding it are not. Therefore, enterprise AI’s future will rely heavily upon the investments businesses make in the technology.
“Successful AI business outcomes will depend on the careful selection of use cases,” said Alys Woodward, senior director analyst at Gartner. “Use cases that deliver significant business value, yet can be scaled to reduce risk, are critical to demonstrate the impact of AI investment to business stakeholders.”
Finally, and perhaps most significantly, there have been mixed reactions from the general public when it comes to artificial intelligence developments. While many users are excited about new AI tools like generative AI models, others are nervous about losing their jobs or their personal information to the technology. Others are concerned about the future implications of tools that are only growing “smarter” and more capable.
High levels of adoption have propelled certain forms of AI forward, while others have languished in obscurity. In a very real sense, the future of AI may be more about people than about machines.
Also see: The Future of Artificial Intelligence
Artificial Intelligence: Additional Resources
In an AI market that’s constantly changing, it can be difficult to keep up with the latest news and trends. We’ve gathered several of our top resources in one place to help you stay current on artificial intelligence:
- The AI Market: An Overview
- Cloud and AI Combined: Revolutionizing Tech
- Guide to Algorithms in AI
- Generative AI Startups in 2023
- Generative AI Companies: Top 12 Leaders
- Top 9 Generative AI Applications and Tools
- Generative AI Landscape: Current and Future Trends
- Generative AI and Cybersecurity: Advantages and Challenges
- The Benefits of Generative AI
- AI Courses: Learn AI With These Top 10 Courses
- Top 10 AI Detector Tools for 2023
- AI Careers: How to Build a Career in AI
- ChatGPT vs. Watson Assistant
- ChatGPT vs. Google Bard: Generative AI Comparison