Wiki
We have compiled hundreds of related entries to help you understand artificial intelligence.
The curse of sparsity is a key scientific issue in the field of autonomous driving. It refers to the fact that in real driving environments, safety-critical events occur with extremely low probability, so they are extremely sparse in driving data, making it difficult for deep learning models to learn their characteristics.
Diffusion loss is a loss function related to the diffusion model, which is used during the training process to guide the model to learn how to gradually remove noise and restore the original structure of the data.
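As a rough sketch of the idea (following the common DDPM-style noise-prediction formulation rather than any particular implementation), the loss asks a network to recover the noise that was injected into a clean sample; `model`, `x0`, and `alphas_cumprod` below are hypothetical placeholders:

```python
import torch
import torch.nn.functional as F

def diffusion_loss(model, x0, alphas_cumprod):
    """Simplified DDPM-style noise-prediction loss: corrupt x0 with noise at a
    random step t, then train the model to predict that noise from x_t."""
    b = x0.shape[0]
    t = torch.randint(0, len(alphas_cumprod), (b,), device=x0.device)
    noise = torch.randn_like(x0)
    a_bar = alphas_cumprod[t].view(b, *([1] * (x0.dim() - 1)))
    # forward (noising) process: x_t = sqrt(a_bar) * x0 + sqrt(1 - a_bar) * noise
    x_t = a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * noise
    # the model is asked to undo the corruption by recovering the injected noise
    return F.mse_loss(model(x_t, t), noise)
```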
The Long-Tail Challenge generally refers to a class of problems encountered in machine learning and deep learning, especially in visual recognition tasks, where a few head classes account for most of the training samples while many tail classes have only a handful of examples.
Crapness Ratio is a metric used to evaluate the proportion of nonsense or invalid information in the answers given by large language models (LLMs).
In the field of artificial intelligence, lifelong learning refers to the ability of a machine to continuously update and improve its knowledge base and models by continuously receiving new data and experience.
Hardware independence refers to software, applications, operating systems, or other types of systems that are designed not to be dependent on or specific to any particular hardware platform or hardware architecture.
LlamaIndex is a tool for building indexes and querying local documents, which acts as a bridge between custom data and Large Language Models (LLMs).
The modality generator is a key component in a multimodal learning system; its main role is to generate outputs in different modalities, such as images, videos, or audio.
The Visual Language Geographic Foundation Model is an artificial intelligence model specifically designed to process and analyze Earth observation data.
Future Multipredictor Mixing (FMM) is a model component for time series forecasting that is part of the TimeMixer architecture.
PDM (Past Decomposable Mixing) is one of the core components of the TimeMixer model for time series forecasting.
MRL (Matryoshka Representation Learning) learns information at different granularities by optimizing nested low-dimensional vectors, allowing a single embedding to adapt to the computational constraints of downstream tasks.
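A minimal sketch of how such nested training might look, assuming a classification setup with hypothetical prefix sizes and one linear head per prefix:

```python
import torch.nn as nn
import torch.nn.functional as F

dims = (64, 128, 256, 512)              # assumed nested prefix sizes
num_classes = 1000                      # assumed downstream label space
heads = nn.ModuleDict({str(d): nn.Linear(d, num_classes) for d in dims})

def matryoshka_loss(embedding, labels):
    """Sum the classification loss over nested prefixes of one embedding, so
    every prefix (first 64, 128, ... dims) becomes a usable representation."""
    return sum(F.cross_entropy(heads[str(d)](embedding[:, :d]), labels)
               for d in dims)
```

At inference time the same embedding can then simply be truncated to the smallest prefix that fits the available compute budget.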
Hadoop is an open source framework developed by the Apache Software Foundation for storing and processing large amounts of data on clusters of commodity hardware.
Edge AI refers to the deployment of AI algorithms and AI models directly on local edge devices such as sensors or Internet of Things (IoT) devices, enabling real-time data processing and analysis without constant reliance on cloud infrastructure. Simply put, edge AI is the integration of edge computing and artificial intelligence […]
An open source project, product, or initiative embraces and promotes the principles of open communication, collaborative participation, rapid prototyping, transparency, meritocracy, and community-oriented development.
Neuromorphic computing is an approach to designing and building computers that mimic the structure and function of the human brain, using artificial neurons and synapses to process information.
Function calling is a basic concept in programming: a specific task is performed by invoking a defined function during program execution.
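A minimal illustration in Python, where `area_of_circle` is a made-up function defined once and then called by name:

```python
def area_of_circle(radius: float) -> float:
    """A defined function that performs one specific task."""
    return 3.141592653589793 * radius ** 2

# calling the function: pass an argument, receive the returned result
print(area_of_circle(2.0))  # 12.566370614359172
```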
A Spiking Neural Network (SNN), at the intersection of neuroscience and artificial intelligence, is a neural network model that simulates the behavior of biological neurons in the brain.
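One common spiking-neuron model is the leaky integrate-and-fire (LIF) neuron; the sketch below is a toy simulation with made-up parameters (`tau`, `v_thresh`), not a full SNN:

```python
import numpy as np

def lif_neuron(input_current, dt=1.0, tau=20.0, v_thresh=1.0, v_reset=0.0):
    """Leaky integrate-and-fire neuron: the membrane voltage leaks toward rest,
    integrates the input, and emits a binary spike when it crosses threshold."""
    v, spikes = 0.0, []
    for I in input_current:
        v += (-v + I) * dt / tau        # leaky integration of the input
        if v >= v_thresh:               # threshold crossing -> spike
            spikes.append(1)
            v = v_reset                 # reset after firing
        else:
            spikes.append(0)
    return spikes

print(sum(lif_neuron(np.full(200, 1.5))))  # spike count for a constant input
```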
The Finite Element Method (FEM) is a numerical calculation method that approximates the physical behavior of an object by discretizing a continuous physical structure into a finite number of small parts, called "elements". These elements can be one-dimensional line elements, two-dimensional surface elements, or three-dimensional volume elements.
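A minimal sketch of the idea for the simplest case, a 1D Poisson problem -u''(x) = f(x) on [0, 1] with zero boundary values, discretized into equal-size line elements (the element count and load function below are arbitrary illustrative choices):

```python
import numpy as np

def fem_1d_poisson(n_elements=10, f=lambda x: 1.0):
    """Solve -u''(x) = f(x) on [0, 1] with u(0) = u(1) = 0 using linear elements."""
    n_nodes = n_elements + 1
    h = 1.0 / n_elements
    K = np.zeros((n_nodes, n_nodes))   # global stiffness matrix
    F = np.zeros(n_nodes)              # global load vector
    for e in range(n_elements):
        i, j = e, e + 1
        # element stiffness for linear shape functions: (1/h) * [[1, -1], [-1, 1]]
        K[i, i] += 1 / h; K[j, j] += 1 / h
        K[i, j] -= 1 / h; K[j, i] -= 1 / h
        # element load via the midpoint rule, split between the two nodes
        xm = (i + 0.5) * h
        F[i] += f(xm) * h / 2
        F[j] += f(xm) * h / 2
    # impose the boundary conditions by solving only for the interior nodes
    u = np.zeros(n_nodes)
    u[1:-1] = np.linalg.solve(K[1:-1, 1:-1], F[1:-1])
    return u

print(fem_1d_poisson())  # should approximate u(x) = x(1 - x) / 2
```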
Contextual position encoding is a new type of position encoding method in which positional information varies with the context rather than being fixed by token index.
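A rough sketch of one way such context-dependent positions can be computed (loosely following the CoPE idea of gating plus cumulative sums; the tensor shapes and the simple sigmoid gate are assumptions for illustration):

```python
import torch

def contextual_positions(q, k):
    """Each key contributes a context-dependent gate in [0, 1]; a key's
    "position" relative to a query is the cumulative sum of gates up to it."""
    # q, k: (seq_len, dim); gate[i, j] says how much key j counts for query i
    gates = torch.tril(torch.sigmoid(q @ k.T))       # causal: only j <= i
    # position of key j for query i = sum of gates[i, j..i]
    positions = gates.flip(-1).cumsum(-1).flip(-1)    # reverse cumulative sum
    return torch.tril(positions)
```

The resulting fractional positions would then typically be mapped to interpolated position embeddings rather than used directly.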
Learning With Errors (LWE) is a very important problem in cryptography and theoretical computer science, proposed by Oded Regev in 2005. The LWE problem can be described as: given a system of linear equations, where each […]
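A toy instance makes the structure concrete: each row of A gives one noisy linear equation in the secret s, and the parameters (n, q, m, noise level) below are arbitrary illustrative choices:

```python
import numpy as np

def lwe_sample(n=16, q=97, m=32, noise_std=1.0, seed=0):
    """Toy Learning With Errors instance: b = A @ s + e (mod q). Without the
    small errors e, recovering s is plain linear algebra; with them, it is
    believed to be hard, which is what LWE-based cryptography relies on."""
    rng = np.random.default_rng(seed)
    s = rng.integers(0, q, size=n)                               # secret vector
    A = rng.integers(0, q, size=(m, n))                          # public matrix
    e = np.rint(rng.normal(0.0, noise_std, size=m)).astype(int)  # small errors
    b = (A @ s + e) % q
    return A, b, s
```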
In mathematics, low-rank approximation is a minimization problem in which the cost function measures the goodness of fit between a given matrix (the data) and an approximating matrix (the optimization variable), subject to the constraint that the approximating matrix has reduced rank.
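By the Eckart-Young theorem, the best rank-k approximation in the Frobenius (and spectral) norm is obtained by truncating the singular value decomposition; a small NumPy sketch:

```python
import numpy as np

def best_rank_k_approximation(M, k):
    """Keep only the k largest singular values/vectors of M."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U[:, :k] * s[:k]) @ Vt[:k, :]

M = np.random.default_rng(0).normal(size=(50, 30))
M_k = best_rank_k_approximation(M, 5)
print(np.linalg.matrix_rank(M_k), np.linalg.norm(M - M_k))
```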
Knowledge distillation is a machine learning technique that aims to transfer the learnings of a large pre-trained model (the “teacher model”) to a smaller “student model”.
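A common way to express this (the classic Hinton-style formulation; the temperature and weighting below are arbitrary) is a loss that mixes a softened teacher-matching term with the usual hard-label term:

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Match the teacher's softened output distribution (KL term, scaled by T^2)
    while still fitting the ground-truth labels (cross-entropy term)."""
    soft_targets = F.softmax(teacher_logits / T, dim=-1)
    soft_loss = F.kl_div(F.log_softmax(student_logits / T, dim=-1),
                         soft_targets, reduction="batchmean") * (T * T)
    hard_loss = F.cross_entropy(student_logits, labels)
    return alpha * soft_loss + (1 - alpha) * hard_loss
```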
YOLOv10 is a real-time object detection model that achieves state-of-the-art performance while significantly reducing computational overhead.