Wiki
We have compiled hundreds of related entries to help you understand "artificial intelligence"
Emergence in the field of artificial intelligence refers to a phenomenon in which complex collective behaviors or structures emerge through the interaction of simple individuals or rules. In artificial intelligence, this emergence can refer to high-level features or behaviors learned by the model that are not directly designed […]
Explainable AI (XAI) is a set of processes and methods that allow human users to understand and trust the results and outputs created by machine learning algorithms.
Conditional computation is a technique to reduce the total amount of computation by performing computation only when it is needed.
Statistical Classification is a supervised learning method used to classify new observations into one of the known categories.
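As a minimal sketch of statistical classification, consider a nearest-centroid rule: each known category is summarized by the mean of its training points, and a new observation is assigned to the category with the closest mean. The data and labels below are made up purely for illustration.

```python
import numpy as np

def fit_centroids(X, y):
    """Return a dict mapping each class label to its feature mean."""
    return {label: X[y == label].mean(axis=0) for label in np.unique(y)}

def predict(centroids, x):
    """Assign x to the class whose centroid is nearest in Euclidean distance."""
    return min(centroids, key=lambda label: np.linalg.norm(x - centroids[label]))

X = np.array([[0.0, 0.0], [0.2, 0.1], [3.0, 3.0], [3.1, 2.9]])
y = np.array([0, 0, 1, 1])
centroids = fit_centroids(X, y)
print(predict(centroids, np.array([0.1, 0.0])))  # class 0
print(predict(centroids, np.array([2.9, 3.1])))  # class 1
```

Real classifiers (logistic regression, decision trees, neural networks) replace the centroid rule with richer decision functions, but the supervised setup is the same: learn from labeled examples, then assign new observations to known categories.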
Variational Autoencoder (VAE) is an artificial neural network architecture proposed by Diederik P. Kingma and Max Welling, belonging to the families of probabilistic graphical models and variational Bayesian methods.
Masked Language Modeling (MLM) is a deep learning technique widely used in natural language processing (NLP) tasks, especially in the training of Transformer models such as BERT and RoBERTa.
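The masking step at the core of MLM can be sketched in a few lines: a fraction of tokens (commonly around 15%) is replaced with a `[MASK]` symbol, and the model is trained to recover the originals at those positions. Tokenization and the model itself are omitted here; the seed is chosen for reproducibility.

```python
import random

def mask_tokens(tokens, mask_prob=0.15, seed=1):
    """Replace ~mask_prob of tokens with [MASK]; record the originals as targets."""
    rng = random.Random(seed)
    masked, targets = [], []
    for tok in tokens:
        if rng.random() < mask_prob:
            masked.append("[MASK]")
            targets.append(tok)      # the model must recover this token
        else:
            masked.append(tok)
            targets.append(None)     # position not scored by the loss
    return masked, targets

masked, targets = mask_tokens("the cat sat on the mat".split())
print(masked)   # with this seed, only the first token is masked
```

BERT additionally replaces some selected tokens with random words or leaves them unchanged instead of always inserting `[MASK]`; this sketch shows only the basic mask-and-predict idea.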
Knowledge Engineering is a branch of Artificial Intelligence (AI) that develops rules and applies them to data to mimic the thought processes of a person with expertise on a particular subject.
Inception Score (IS) is an objective performance metric used to evaluate the quality of generated or synthetic images produced by a generative adversarial network (GAN).
Fuzzy Logic is a variable-processing approach in which truth values range over a continuum between completely false and completely true, rather than being restricted to 0 or 1. Fuzzy logic attempts to solve problems using an open, imprecise spectrum of data and heuristic methods in order to arrive at a set of accurate conclusions.
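A membership function makes the idea concrete: instead of a crisp hot/not-hot decision, "hot" is a degree between 0 and 1. The thresholds below are chosen purely for illustration.

```python
def hot(temp_c, low=20.0, high=35.0):
    """Degree to which a temperature counts as 'hot', in [0, 1].

    Rises linearly from 0 at `low` to 1 at `high` (a ramp membership function).
    """
    if temp_c <= low:
        return 0.0
    if temp_c >= high:
        return 1.0
    return (temp_c - low) / (high - low)

print(hot(10))    # 0.0 - definitely not hot
print(hot(27.5))  # 0.5 - partially hot
print(hot(40))    # 1.0 - fully hot
```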
Fréchet Inception Distance (FID) is a performance metric that measures the similarity between the feature distributions of generated and real images; lower FID scores indicate that the generated images are of higher quality and closer to the real images. FID is computed from image feature vectors extracted by an Inception network.
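As a hedged sketch of the computation, assume both feature distributions are Gaussians with *diagonal* covariances, so the matrix square root in the full FID formula reduces to elementwise square roots. Real FID uses full covariances of Inception-v3 features; this simplification only illustrates the structure of the distance (a mean term plus a covariance term).

```python
import numpy as np

def fid_diagonal(mu1, var1, mu2, var2):
    """Fréchet distance between two Gaussians with diagonal covariances."""
    mean_term = np.sum((mu1 - mu2) ** 2)                    # how far apart the means are
    cov_term = np.sum((np.sqrt(var1) - np.sqrt(var2)) ** 2) # how different the spreads are
    return mean_term + cov_term

mu = np.array([0.0, 0.0])
var = np.array([1.0, 1.0])
print(fid_diagonal(mu, var, mu, var))        # 0.0 for identical distributions
print(fid_diagonal(mu, var, mu + 1.0, var))  # 2.0: means shifted by 1 in 2 dims
```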
DALL-E is an AI program developed by OpenAI that generates images from text prompts. It combines language and visual processing, and this innovative approach opens up new possibilities in creative work, communication, education, and other fields. DALL-E was launched in January 2021 and is […]
LoRA (Low-Rank Adaptation) is a breakthrough, efficient fine-tuning technique that harnesses the power of large pre-trained models for custom tasks and datasets without straining resources or incurring prohibitively high costs.
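The low-rank update at the heart of LoRA can be sketched with plain matrices: the frozen weight matrix W is adapted by adding a product B @ A of two small rank-r factors, so only r*(d_in + d_out) parameters are trained instead of d_in*d_out. Shapes and the alpha/r scaling follow the common convention; the numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r, alpha = 8, 8, 2, 4

W = rng.normal(size=(d_out, d_in))     # frozen pre-trained weights
A = rng.normal(size=(r, d_in)) * 0.01  # trainable rank-r factor
B = np.zeros((d_out, r))               # trainable, initialized to zero

def forward(x):
    # With B initialized to zero the adapted layer starts out identical
    # to the frozen one; training moves only A and B, never W.
    return (W + (alpha / r) * B @ A) @ x

x = rng.normal(size=d_in)
assert np.allclose(forward(x), W @ x)  # no change before any training
```

Because the update B @ A can be merged into W after training, the adapted model incurs no extra inference cost.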
Case-Based Reasoning (CBR) works by retrieving similar cases from the past and adapting them to the current situation to make a decision or solve a problem.
Adversarial Machine Learning is the study of attacks that aim to deceive machine learning models by supplying deceptive inputs, as well as defenses against such attacks.
Cognitive Search represents the next generation of enterprise search, using artificial intelligence (AI) techniques to refine users' search queries and extract relevant information from multiple disparate data sets.
Code Quality describes the overall assessment of the effectiveness, reliability, and maintainability of a piece of software code. The main qualities of code quality include readability, clarity, reliability, security, and modularity. These qualities make the code easy to understand, change, operate, and debug.
Cloud containers are a technology for deploying, running, and managing applications in cloud environments. They provide a lightweight, portable way to encapsulate applications and their dependencies in an isolated runtime environment.
Model quantization can reduce the memory footprint and computational requirements of deep neural network models. It involves converting the weights (and sometimes the activations) of a neural network from high-precision floating-point numbers to a lower-precision format, such as 16-bit floats or 8-bit integers; weight quantization is the most common variant.
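A minimal sketch of symmetric 8-bit weight quantization: weights are scaled so that the largest magnitude maps to 127, rounded to integers, and dequantized back with the same scale. Real frameworks add per-channel scales, zero points, and calibration; this shows only the core round-trip.

```python
import numpy as np

def quantize_int8(w):
    """Symmetric quantization: map max |w| to 127, round to int8."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from int8 values."""
    return q.astype(np.float32) * scale

w = np.random.default_rng(0).normal(size=100).astype(np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
print(np.abs(w - w_hat).max())  # rounding error, at most scale/2
```

The int8 array takes a quarter of the memory of float32 weights, at the cost of a small, bounded rounding error per weight.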
Triplet loss is a loss function for deep learning that minimizes the distance between an anchor point and a positive sample with the same identity, while maximizing the distance between the anchor point and a negative sample with a different identity.
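The standard margin form of the loss can be sketched directly: the loss is zero once the anchor–positive distance is smaller than the anchor–negative distance by at least `margin`, and positive otherwise. The vectors below are toy embeddings.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """max(0, d(a, p) - d(a, n) + margin) with Euclidean distances."""
    d_pos = np.linalg.norm(anchor - positive)  # pull same-identity pair together
    d_neg = np.linalg.norm(anchor - negative)  # push different-identity pair apart
    return max(0.0, d_pos - d_neg + margin)

a = np.array([0.0, 0.0])
p = np.array([0.1, 0.0])  # same identity, already close to the anchor
n = np.array([3.0, 0.0])  # different identity, already far away
print(triplet_loss(a, p, n))  # 0.0: the margin constraint is satisfied
```

Only triplets that violate the margin contribute gradient, which is why triplet-based training usually pairs this loss with hard-negative mining.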
Large Language Model Operations (LLMOps) encompasses the practices, techniques, and tools for the operational management of large language models in production environments. LLMOps is specifically about using tools and methods to manage and automate the lifecycle of LLMs, from fine-tuning to maintenance.
Data gravity refers to the ability of a body of data to attract applications, services, and other data. As the quantity and quality of data grow over time, the data attracts ever more applications and services to connect to it.
Gradient Accumulation is a mechanism for dividing a batch of samples used to train a neural network into several smaller micro-batches that are run sequentially, accumulating their gradients before a single weight update, which simulates a larger batch size than fits in memory.
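A hedged sketch with a least-squares model makes the equivalence concrete: the gradient of a batch is the mean of per-example gradients, so summing micro-batch gradients (weighted by micro-batch size) and dividing by the full batch size reproduces the full-batch gradient exactly.

```python
import numpy as np

def grad(w, X, y):
    """Gradient of the mean squared error 0.5 * mean((X @ w - y)**2)."""
    return X.T @ (X @ w - y) / len(y)

rng = np.random.default_rng(0)
X, y, w = rng.normal(size=(8, 3)), rng.normal(size=8), rng.normal(size=3)

full = grad(w, X, y)                        # gradient over the full batch of 8

acc = np.zeros(3)
for i in range(0, 8, 2):                    # micro-batches of 2, run sequentially
    acc += grad(w, X[i:i+2], y[i:i+2]) * 2  # weight by micro-batch size
acc /= 8                                    # normalize by full batch size

assert np.allclose(full, acc)               # identical update direction
```

In a framework like PyTorch this corresponds to calling `backward()` on several micro-batches before a single optimizer step, since gradients accumulate in place between steps.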
Model validation is the process of evaluating the performance of a machine learning (ML) model on a dataset separate from the training dataset. It is an important step in the ML model development process because it helps ensure that the model generalizes to new, unseen data and does not overfit to the training data.
Pool-based sampling is a popular active learning method that selects informative examples for labeling. A pool of unlabeled data is created, and the model selects the most informative examples for manual annotation. These labeled examples are used to retrain the model, and the process is repeated.
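One round of pool-based active learning with uncertainty sampling can be sketched as follows: from a pool of unlabeled examples, pick the ones whose predicted probability is closest to 0.5 and send them for labeling. The probability model here is a made-up stand-in; a real loop would retrain it on the newly labeled examples and repeat.

```python
def select_most_uncertain(pool, predict_proba, k=2):
    """Return the k pool items whose predicted probabilities are nearest 0.5."""
    return sorted(pool, key=lambda x: abs(predict_proba(x) - 0.5))[:k]

# Stand-in model: predicted probability grows linearly with the input value.
def predict_proba(x):
    return min(max(x / 10.0, 0.0), 1.0)

pool = [0.5, 4.8, 9.5, 5.2, 1.0]
queried = select_most_uncertain(pool, predict_proba)
print(queried)  # the two pool items with predictions closest to 0.5
```

Other query strategies (margin sampling, entropy, query-by-committee) swap in a different informativeness score, but the pool-select-label-retrain loop is the same.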