Artificial Intelligence (AI) is a complex field that employs specialised terminology, making it challenging for the general public to grasp its nuances. To aid understanding, we’ve created a glossary that outlines key terms frequently used within the AI industry. This resource will be updated regularly to reflect ongoing advancements in AI research and emerging concerns regarding safety.
One prominent concept in AI is Artificial General Intelligence (AGI), which denotes systems that can match or outperform humans at most tasks. Definitions vary: some experts see AGI as a collaborator akin to a highly capable co-worker, while others define it more technically as highly autonomous systems that outperform humans at most economically valuable work.
Another term, AI agent, refers to advanced tools that utilise AI to autonomously complete tasks like booking appointments or managing code. These agents employ multiple AI systems to perform complex actions, although the technology is still evolving.
Chain of thought reasoning in large language models involves the breakdown of problems into manageable steps to enhance accuracy. This method, inspired by human reasoning, can result in more reliable outcomes, particularly in logic-based scenarios.
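The idea can be illustrated with a toy sketch (this is a plain-Python analogy for step-by-step decomposition, not how an LLM reasons internally):

```python
def solve_directly(items, unit_price, discount):
    """One-shot answer: total cost of items at unit_price each, minus a discount."""
    return items * unit_price - discount

def solve_with_steps(items, unit_price, discount):
    """The same problem, broken into explicit intermediate steps,
    mirroring how chain-of-thought prompting asks a model to 'show its work'."""
    steps = []
    subtotal = items * unit_price
    steps.append(f"Step 1: {items} items x {unit_price} each = {subtotal}")
    total = subtotal - discount
    steps.append(f"Step 2: {subtotal} minus discount {discount} = {total}")
    return total, steps

answer, steps = solve_with_steps(3, 4, 2)
```

Exposing intermediate steps makes each one individually checkable, which is why the technique tends to help most on logic and arithmetic problems.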
The term compute denotes the computational power required for AI models to function effectively, including various hardware like GPUs and CPUs that support the infrastructure of AI development.
Deep learning represents a subset of machine learning characterised by its multi-layered neural networks, allowing these models to detect intricate patterns independently. However, this complexity necessitates vast amounts of data and extended training periods, leading to higher development costs.
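"Multi-layered" can be made concrete with a minimal forward pass, assuming a tiny two-layer network with made-up random weights (real deep networks have millions to billions of learned parameters):

```python
import random

def relu(values):
    """A common activation function: negative values are clipped to zero."""
    return [max(0.0, v) for v in values]

def dense(x, weights, biases):
    """One fully connected layer: each output mixes every input."""
    return [sum(xi * weights[i][j] for i, xi in enumerate(x)) + biases[j]
            for j in range(len(biases))]

random.seed(0)
# Layer 1: 3 inputs -> 4 hidden units; Layer 2: 4 hidden units -> 2 outputs.
W1 = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(3)]
b1 = [0.0] * 4
W2 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(4)]
b2 = [0.0] * 2

def forward(x):
    hidden = relu(dense(x, W1, b1))  # the hidden layer learns intermediate features
    return dense(hidden, W2, b2)     # the output layer combines them

output = forward([0.5, -0.2, 0.8])
```

Stacking layers like this is what lets deep models build up intricate patterns from simple ones, and it is also why they need so much data and compute to train.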
Diffusion techniques, integral to generative AI, simulate the process of data deterioration through added noise, aiming to learn how to reconstruct data from this degraded state.
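The "forward" half of that process, gradually corrupting data with noise, is simple to sketch (a real diffusion model then learns the much harder reverse step of denoising):

```python
import random

random.seed(42)

def add_noise(data, noise_scale):
    """One forward diffusion step: corrupt each value with Gaussian noise."""
    return [v + random.gauss(0.0, noise_scale) for v in data]

# Gradually destroy a simple "signal" over several steps; the model's
# training objective is to learn how to walk this process backwards.
signal = [1.0, 0.5, -0.5, -1.0]
noisy = list(signal)
for step in range(10):
    noisy = add_noise(noisy, noise_scale=0.1)
```

After enough steps the data is indistinguishable from noise, which is why a trained diffusion model can start from pure noise and "denoise" its way to a new image or audio clip.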
Distillation involves extracting knowledge from a large AI model (the teacher) to train a smaller model (the student), preserving much of the teacher's performance at a fraction of the resource cost. The approach is widely believed to underpin faster, cheaper variants of flagship models, such as OpenAI's GPT-4 Turbo.
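A core ingredient of distillation can be shown in a few lines: the teacher supplies "soft" probability targets (softened with a temperature), and the student is trained to match them. The logit values below are made up for illustration:

```python
import math

def softmax(logits, temperature=1.0):
    """Convert raw scores to probabilities; higher temperature -> softer targets."""
    scaled = [l / temperature for l in logits]
    peak = max(scaled)
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def cross_entropy(targets, predicted):
    """The student's loss: how far its distribution is from the teacher's."""
    return -sum(t * math.log(p) for t, p in zip(targets, predicted))

# The large teacher model produces soft probabilities rather than hard labels,
# carrying more information (e.g. "class B is a plausible second choice").
teacher_logits = [4.0, 1.5, 0.5]
soft_targets = softmax(teacher_logits, temperature=3.0)

# The smaller student is trained to minimise this loss against the soft targets.
student_logits = [2.0, 1.0, 0.8]
loss = cross_entropy(soft_targets, softmax(student_logits, temperature=3.0))
```

Training against soft targets rather than hard labels is what lets the smaller student inherit nuance from the teacher.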
Fine-tuning refers to refining an AI model for specific tasks by introducing new, targeted data after its initial training phase. Generative Adversarial Networks (GANs), meanwhile, pit two competing neural networks against each other, one generating data and one judging it, to improve the quality of what is generated.
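Fine-tuning is often done by freezing most of the pretrained model and updating only a small task-specific part. A hypothetical toy sketch (the model, data, and layer names here are invented for illustration):

```python
# A pretend "pretrained model": most layers stay frozen during fine-tuning.
pretrained = {"layer1": [0.4, -0.2], "layer2": [0.7], "head": [0.1]}

def fine_tune(model, task_data, lr=0.1, frozen=("layer1", "layer2")):
    """Continue training on a small task dataset, updating only the unfrozen head."""
    tuned = {name: list(weights) for name, weights in model.items()}
    for x, target in task_data:
        prediction = sum(w * x for w in tuned["head"])
        error = prediction - target
        # Gradient-style update applied only to the "head"; frozen layers are untouched.
        tuned["head"] = [w - lr * error * x for w in tuned["head"]]
    return tuned

tuned = fine_tune(pretrained, [(1.0, 1.0), (1.0, 1.0)])
```

Because only a small slice of the model changes, fine-tuning needs far less data and compute than training from scratch.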
Hallucination describes instances when an AI generates incorrect information, posing risks in contexts like health advice. This highlights the challenges inherent in ensuring high-quality, reliable AI outputs.
Inference is the critical process where AI models make predictions based on previously learned data, and is supported by various hardware configurations.
Tokens are crucial for human-AI communication, acting as the basic units of processed data within large language models (LLMs). The economics of AI often revolve around tokens, with providers charging based on the number of tokens processed.
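A simplified sketch of token-based billing (real LLM tokenizers use subword schemes such as byte-pair encoding rather than whitespace splitting, and the price below is a made-up figure):

```python
def tokenize(text):
    """Toy tokenizer: split on whitespace. Real tokenizers split words
    into subword pieces, so counts differ in practice."""
    return text.lower().split()

def estimate_cost(text, price_per_1k_tokens=0.002):
    """Estimate the billing cost of processing a piece of text."""
    n_tokens = len(tokenize(text))
    return n_tokens, n_tokens / 1000 * price_per_1k_tokens

tokens, cost = estimate_cost("AI models are often billed per token")
```

This is why long prompts and long responses cost more: both the input and the output are metered in tokens.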
Transfer learning allows an AI model to leverage previously acquired knowledge to foster efficiency in new model training.
Finally, weights are fundamental in determining input significance during the training process, affecting how models generate outcomes based on their learning.
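The role of weights is easiest to see in a single prediction, assuming a deliberately tiny toy model with invented numbers:

```python
def predict(features, weights, bias=0.0):
    """A model's output is a weighted sum of its inputs:
    each weight scales how much that feature matters."""
    return sum(f * w for f, w in zip(features, weights)) + bias

# Toy house-price model: the learned weights say size pushes the price up
# strongly, while age pulls it down slightly.
features = [120.0, 30.0]   # size in square metres, age in years
weights  = [2.0, -0.5]     # learned importance of each feature
price = predict(features, weights, bias=50.0)
```

Training is, at its core, the process of nudging these weights until the model's outputs match the data it learns from.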
This glossary aims to demystify important AI concepts as the field continues to evolve and expand.