xlite-dev

Develops ML/AI toolkits and ML/AI/CUDA learning resources.

Pinned repositories

  1. LeetCUDA (Public)

    📚LeetCUDA: Modern CUDA learning notes with PyTorch for beginners🐑, featuring 200+ CUDA kernels, Tensor Cores, HGEMM, and FA-2 MMA.🎉

    Cuda · 9.5k stars · 931 forks

  2. lite.ai.toolkit (Public)

    🛠A lite C++ AI toolkit: 100+ models with MNN, ONNX Runtime (ORT), and TensorRT (TRT), including detection, segmentation, Stable-Diffusion, Face-Fusion, etc.🎉

    C++ · 4.4k stars · 770 forks

  3. Awesome-LLM-Inference (Public)

    📚A curated list of Awesome LLM/VLM Inference Papers with Codes: Flash-Attention, Paged-Attention, WINT8/4, Parallelism, etc.🎉

    Python · 4.9k stars · 337 forks

  4. Awesome-DiT-Inference (Public)

    📚A curated list of Awesome Diffusion Inference Papers with Codes: Sampling, Cache, Quantization, Parallelism, etc.🎉

    Python · 504 stars · 25 forks

  5. torchlm (Public)

    💎An easy-to-use PyTorch library for face landmarks detection: training, evaluation, inference, and 100+ data augmentations.🎉

    Python · 267 stars · 27 forks

  6. ffpa-attn (Public)

    🤖FFPA: extends FlashAttention-2 with Split-D for ~O(1) SRAM complexity at large head dims, 1.8x~3x↑🎉 vs SDPA EA.

    Cuda · 246 stars · 13 forks
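For context, the SDPA baseline that FFPA is benchmarked against computes softmax(QK^T / sqrt(d))V. The sketch below is a minimal pure-Python reference of that formula (function names `softmax` and `sdpa` are our own; real implementations such as PyTorch's fused SDPA kernels never materialize the full score matrix like this):

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of floats.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def sdpa(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V.

    Q, K, V are lists of d-dimensional row vectors (lists of floats).
    """
    d = len(Q[0])
    out = []
    for q in Q:
        # Attention scores of this query against every key, scaled by sqrt(d).
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in K]
        w = softmax(scores)
        # Weighted sum of the value rows.
        out.append([sum(wi * v[j] for wi, v in zip(w, V)) for j in range(len(V[0]))])
    return out
```

With all-zero keys every score is equal, so the weights are uniform and the output is the mean of the value rows; e.g. `sdpa([[1.0, 0.0]], [[0.0, 0.0], [0.0, 0.0]], [[1.0, 2.0], [3.0, 4.0]])` returns `[[2.0, 3.0]]`.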

Repositories

Showing 10 of 53 repositories
  • diffusers (Public, forked from huggingface/diffusers)

    🤗 Diffusers: State-of-the-art diffusion models for image, video, and audio generation in PyTorch and FLAX.

    Python · 0 stars · 6,777 forks · Apache-2.0 · Updated Jan 23, 2026
  • sglang (Public, forked from sgl-project/sglang)

    SGLang is a fast serving framework for large language models and vision language models.

    Python · 0 stars · 4,183 forks · Apache-2.0 · Updated Jan 22, 2026
  • SageAttention (Public, forked from thu-ml/SageAttention)

    Quantized attention that achieves speedups of 2.1-3.1x and 2.7-5.1x over FlashAttention2 and xformers, respectively, without losing end-to-end metrics across various models.

    Cuda · 0 stars · 326 forks · Apache-2.0 · Updated Jan 22, 2026
  • cache-dit (Public, forked from vipshop/cache-dit)

    A Unified and Flexible Inference Engine with Hybrid Cache Acceleration and Parallelism for 🤗DiTs.

    Python · 4 stars · 55 forks · Apache-2.0 · Updated Jan 21, 2026
  • vllm-omni (Public, forked from vllm-project/vllm-omni)

    A framework for efficient inference with omni-modality models.

    Python · 0 stars · 325 forks · Apache-2.0 · Updated Jan 20, 2026
  • ffpa-attn (Public)

    🤖FFPA: extends FlashAttention-2 with Split-D for ~O(1) SRAM complexity at large head dims, 1.8x~3x↑🎉 vs SDPA EA.

    Cuda · 246 stars · 13 forks · 1 open issue · GPL-3.0 · Updated Jan 20, 2026
  • Awesome-DiT-Inference (Public)

    📚A curated list of Awesome Diffusion Inference Papers with Codes: Sampling, Cache, Quantization, Parallelism, etc.🎉

    Python · 504 stars · 25 forks · GPL-3.0 · Updated Jan 18, 2026
  • Awesome-LLM-Inference (Public)

    📚A curated list of Awesome LLM/VLM Inference Papers with Codes: Flash-Attention, Paged-Attention, WINT8/4, Parallelism, etc.🎉

    Python · 4,936 stars · 337 forks · GPL-3.0 · Updated Jan 18, 2026
  • lite.ai.toolkit (Public)

    🛠A lite C++ AI toolkit: 100+ models with MNN, ONNX Runtime (ORT), and TensorRT (TRT), including detection, segmentation, Stable-Diffusion, Face-Fusion, etc.🎉

    C++ · 4,356 stars · 770 forks · 1 open issue · GPL-3.0 · Updated Jan 18, 2026
  • LeetCUDA (Public)

    📚LeetCUDA: Modern CUDA learning notes with PyTorch for beginners🐑, featuring 200+ CUDA kernels, Tensor Cores, HGEMM, and FA-2 MMA.🎉

    Cuda · 9,461 stars · 931 forks · 1 open issue · GPL-3.0 · Updated Jan 18, 2026