Wiki
We have compiled hundreds of related entries to help you understand "artificial intelligence".
The NSG statistic quantifies the ratio of spatial probability gradient to temporal density change.
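Read literally, the statistic compares how sharply the density varies in space with how quickly it changes in time. A minimal way to write that definition for a density p(x, t) (an illustrative sketch; the exact form and normalization used by the paper may differ):

\[ \mathrm{NSG}(x, t) = \frac{\lVert \nabla_x \, p(x, t) \rVert}{\lvert \partial_t \, p(x, t) \rvert} \]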
Mem-I has achieved significant improvements over existing memory-enhanced agent baselines in multiple benchmark tests.
SSP demonstrates the potential of self-play as a scalable and data-efficient training paradigm for agentic LLMs.
CudaForge is a simple, effective, and low-cost multi-agent workflow for CUDA kernel generation and optimization.
FractalForensics exhibits robustness to common image processing operations while remaining fragile to Deepfake manipulations.
ScaleNet is a novel approach that extends pre-trained Vision Transformers (ViT) through weight sharing.
FlashMoBA makes the theoretically optimal block size practical, achieving up to 14.7x speedup on GPUs.
CoT Hijacking is a novel jailbreak attack in which benign chain-of-thought reasoning systematically weakens the model's refusal behavior.
InstanceAssemble enables high-quality and controllable image generation under multimodal conditions.
Layout-to-Image provides a flexible control mechanism for image generation.
HiPO enables adaptive LLM reasoning, built mainly on hybrid data construction and hybrid reinforcement learning.
A novel semantic-aware framework for reconstructing 3D models from sparse views.
AEPO focuses on balancing policy rollout branching and policy updates under the guidance of high-entropy tool calls.
SDAR establishes a new practical language modeling paradigm that unifies the complementary advantages of autoregression and diffusion.
C2C enables direct semantic communication by transforming and fusing key-value (KV) caches between models.
CapRL can effectively train models to generate more general and accurate image descriptions.
The model approximates the Gödel Machine in a coding agent environment and guides the expansion through Thompson sampling with adaptive scheduling.
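The entry names Thompson sampling as the expansion strategy; the sketch below shows the general Beta-Bernoulli form of that idea applied to choosing among candidate branches (the class, reward definition, and parameters are illustrative assumptions, not the system's actual adaptive scheduler):

import random

class ThompsonSampler:
    """Generic Thompson sampling over candidate expansion branches."""
    def __init__(self, n_arms):
        # Beta(1, 1) prior (uniform) over each branch's success probability.
        self.successes = [1] * n_arms
        self.failures = [1] * n_arms

    def select(self):
        # Sample a plausible success rate per branch and expand the best one.
        samples = [random.betavariate(s, f)
                   for s, f in zip(self.successes, self.failures)]
        return max(range(len(samples)), key=lambda i: samples[i])

    def update(self, arm, reward):
        # reward = 1 if the expansion was judged an improvement, 0 otherwise.
        if reward:
            self.successes[arm] += 1
        else:
            self.failures[arm] += 1

sampler = ThompsonSampler(n_arms=4)
arm = sampler.select()
sampler.update(arm, reward=1)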
The first framework to successfully apply distribution matching distillation to MDM-based text generation, setting a record in few-step language sequence generation.
MultiPL-MoE is an effective method for extending LLMs to low-resource programming languages during the post-pretraining stage.
The Tongyi Qianwen team systematically studied the role of gating mechanisms in standard softmax attention.
The Lancelot framework incorporates fully homomorphic encryption into Byzantine-robust federated learning (BRFL) to achieve robust privacy protection.
By jointly aligning global and local features, the attack effectively guides adversarial examples toward the target feature distribution and enhances their transferability.
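A generic sketch of what such feature-distribution alignment can look like as a targeted attack, assuming a feature extractor that returns spatial maps; the extractor, losses, and PGD-style update are illustrative assumptions, not the paper's exact method:

import torch
import torch.nn.functional as F

def feature_alignment_attack(model, x_src, x_tgt, eps=8/255, alpha=2/255, steps=10):
    # Perturb x_src so its features move toward those of the target image x_tgt.
    x_adv = x_src.clone().detach()
    with torch.no_grad():
        f_tgt = model(x_tgt)              # local (spatial) features: (B, C, H, W)
        g_tgt = f_tgt.mean(dim=(2, 3))    # global (pooled) feature:  (B, C)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        f_adv = model(x_adv)
        g_adv = f_adv.mean(dim=(2, 3))
        # Jointly align the global and local features with the target's.
        loss = F.mse_loss(g_adv, g_tgt) + F.mse_loss(f_adv, f_tgt)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Descend on the alignment loss and project back into the eps-ball.
        x_adv = x_adv.detach() - alpha * grad.sign()
        x_adv = x_src + (x_adv - x_src).clamp(-eps, eps)
        x_adv = x_adv.clamp(0, 1)
    return x_adv.detach()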
The receptive field is an important concept for understanding visual information processing and provides a reference for designing, analyzing and optimizing visual models.
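For a stack of convolution or pooling layers, the receptive field follows a standard recursion: with kernel size k and stride s per layer, r grows by (k - 1) * j and the jump j is multiplied by s. A minimal sketch (the layer configuration is illustrative):

def receptive_field(layers):
    # layers: list of (kernel_size, stride) pairs, input to output.
    r, j = 1, 1
    for k, s in layers:
        r += (k - 1) * j   # widen the receptive field by the current jump
        j *= s             # strides compound the jump between samples
    return r

# Example: three 3x3 convolutions with stride 1 give a 7x7 receptive field.
print(receptive_field([(3, 1), (3, 1), (3, 1)]))  # -> 7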
SVG enables faster diffusion training, efficient few-step sampling, and improved generation quality.