huihui_ai

[email protected]

  • deepseek-r1-abliterated

    DeepSeek's first-generation reasoning models, with performance comparable to OpenAI-o1 (a basic usage sketch follows this list).

    thinking 1.5b 7b 8b 14b 32b 70b

    672.8K  Pulls 55  Tags Updated  10 months ago

  • qwen3-abliterated

    Qwen3 is the latest generation of large language models in the Qwen series, offering a comprehensive suite of dense and mixture-of-experts (MoE) models.

    tools thinking 0.6b 1.7b 4b 8b 14b 16b 30b 32b 235b

    175.3K  Pulls 74  Tags Updated  8 months ago

  • qwen3.5-abliterated

    Qwen 3.5 is a family of open-source multimodal models that delivers exceptional utility and performance.

    vision tools thinking 0.8b 2b 4b 9b 27b 35b 122b

    135.6K  Pulls 58  Tags Updated  1 week ago

  • gemma3-abliterated

    The current, most capable model that runs on a single GPU.

    vision 270m 1b 4b 12b 27b

    79.3K  Pulls 16  Tags Updated  7 months ago

  • qwen3-vl-abliterated

    The most powerful vision-language model in the Qwen3 model family to date.

    vision tools thinking 2b 4b 8b 30b 32b

    73.5K  Pulls 54  Tags Updated  5 months ago

  • qwq-abliterated

    QwQ is an experimental research model focused on advancing AI reasoning capabilities.

    tools 32b

    70.5K  Pulls 17  Tags Updated  1 year ago

  • glm-4.7-flash-abliterated

    As the strongest model in the 30B class, GLM-4.7-Flash offers a new option for lightweight deployment that balances performance and efficiency.

    tools thinking

    52.1K  Pulls 5  Tags Updated  2 months ago

  • exaone3.5-abliterated

    EXAONE 3.5 is a collection of instruction-tuned bilingual (English and Korean) generative models ranging from 2.4B to 32B parameters, developed and released by LG AI Research.

    2.4b 7.8b 32b

    44.1K  Pulls 13  Tags Updated  1 year ago

  • llama3.2-abliterate

    Meta's Llama 3.2 goes small with 1B and 3B models.

    tools 1b 3b

    39.8K  Pulls 11  Tags Updated  1 year ago

  • qwen2.5-1m-abliterated

    Qwen2.5-1M is the long-context version of the Qwen2.5 series models, supporting a context length of up to 1M tokens (a long-context sketch follows this list).

    tools 7b 14b

    39.7K  Pulls 11  Tags Updated  1 year ago

  • qwen2.5-abliterate

    Qwen2.5 is a series of large language models from the Alibaba Group.

    tools 0.5b 1.5b 3b 7b 14b 32b 72b

    39.4K  Pulls 46  Tags Updated  11 months ago

  • gpt-oss-abliterated

    OpenAI’s open-weight models designed for powerful reasoning, agentic tasks, and versatile developer use cases.

    tools thinking 20b 120b

    37.8K  Pulls 16  Tags Updated  6 months ago

  • dolphin3-abliterated

    Dolphin 3.0 Llama 3.1 8B 🐬 is the next generation of the Dolphin series of instruct-tuned models, designed to be the ultimate general-purpose local model for coding, math, agentic, function-calling, and general use cases (a tool-calling sketch follows this list).

    tools 8b

    35.4K  Pulls 5  Tags Updated  1 year ago

  • llama3.3-abliterated

    A new state-of-the-art 70B model. Llama 3.3 70B offers performance similar to that of the Llama 3.1 405B model.

    tools 70b

    31.8K  Pulls 10  Tags Updated  1 year ago

  • gemma-4-abliterated

    Gemma 4 models are designed to deliver frontier-level performance at each size. They are well-suited for reasoning, agentic workflows, coding, and multimodal understanding.

    vision tools thinking audio e2b e4b 26b 31b 48b

    26.9K  Pulls 29  Tags Updated  yesterday

  • gemma3n-abliterated

    This is an uncensored version of google/gemma-3n created with abliteration

    19.9K  Pulls 2  Tags Updated  9 months ago

  • qwen2.5-coder-abliterate

    The latest series of Code-Specific Qwen models, with significant improvements in code generation, code reasoning, and code fixing.

    tools 0.5b 1.5b 3b 7b 14b 32b

    16.5K  Pulls 31  Tags Updated  1 year ago

  • huihui-moe-abliterated

    Huihui-MoE-abliterated is a Mixture of Experts (MoE) language model developed by huihui.ai.

    tools thinking 1.5b 5b 12b 23b 24b 46b 57b 60b

    12.3K  Pulls 40  Tags Updated  7 months ago

  • phi4-abliterated

    Phi-4 is a 14B parameter, state-of-the-art open model from Microsoft.

    14b

    12.2K  Pulls 5  Tags Updated  1 year ago

  • kimi-k2

    This is not the abliterated version. Kimi-K2-Instruct is a state-of-the-art mixture-of-experts (MoE) language model with 32 billion activated parameters and 1 trillion total parameters.

    tools 1026b

    11K  Pulls 4  Tags Updated  8 months ago

  • qwen3-coder-abliterated

    Qwen3-Coder features the following key enhancements: significant performance, long-context capabilities, and agentic coding.

    30b 480b

    10.4K  Pulls 9  Tags Updated  8 months ago

  • mistral-small-abliterated

    Mistral Small 3 sets a new benchmark in the “small” Large Language Models category below 70B.

    tools 24b

    9,305  Pulls 10  Tags Updated  1 year ago

  • hy-mt1.5-abliterated

    Hunyuan Translation Model Version 1.5 includes a 1.8B translation model, HY-MT1.5-1.8B, and a 7B translation model.

    1.8b 7b

    8,231  Pulls 16  Tags Updated  3 months ago

  • baronllm-abliterated

    This is an uncensored version of AlicanKiraz0/BaronLLM_Offensive_Security_LLM_Q6_K_GGUF created with abliteration

    tools 8b

    7,292  Pulls 4  Tags Updated  10 months ago

  • deepseek-v3

    A strong Mixture-of-Experts (MoE) language model with 671B total parameters, of which 37B are activated for each token.

    7,166  Pulls 2  Tags Updated  1 year ago

  • hunyuan-mt-abliterated

    The Hunyuan Translation Model comprises a translation model, Hunyuan-MT-7B, and an ensemble model, Hunyuan-MT-Chimera.

    7b

    7,149  Pulls 9  Tags Updated  7 months ago

  • openthinker-abliterated

    A fully open-source family of reasoning models built using a dataset derived by distilling DeepSeek-R1.

    7b 32b

    6,454  Pulls 9  Tags Updated  1 year ago

  • magistral-abliterated

    This is an uncensored version of mistralai/magistral-Small-2506 created with abliteration

    thinking 24b

    6,265  Pulls 6  Tags Updated  10 months ago

  • qwen2.5-vl-abliterated

    Flagship vision-language model of Qwen and also a significant leap from the previous Qwen2-VL.

    vision 3b 7b 32b

    6,152  Pulls 16  Tags Updated  5 months ago

  • granite3.2-vision-abliterated

    A compact and efficient vision-language model, specifically designed for visual document understanding, enabling automated content extraction from tables, charts, infographics, plots, diagrams, and more (a vision sketch follows this list).

    vision tools 2b

    6,010  Pulls 5  Tags Updated  1 year ago

  • phi4-mini-abliterated

    Phi-4-mini brings significant enhancements in multilingual support, reasoning, and mathematics, and now, the long-awaited function calling feature is finally supported.

    tools 3.8b

    5,589  Pulls 5  Tags Updated  1 year ago

  • qwen3-coder-next-abliterated

    Qwen3-Coder-Next is a coding-focused language model from Alibaba's Qwen team, optimized for agentic coding workflows and local development.

    tools

    4,956  Pulls 4  Tags Updated  2 months ago

  • qwen3-next-abliterated

    The first installment in the Qwen3-Next series with strong performance in terms of both parameter efficiency and inference speed.

    tools thinking 80b

    4,809  Pulls 10  Tags Updated  4 months ago

  • dolphin3-r1-abliterated

    Dolphin's first generation reasoning models.

    24b

    4,720  Pulls 10  Tags Updated  1 year ago

  • devstral-abliterated

    This is an uncensored version of mistralai/Devstral-Small-2505 created with abliteration

    tools 24b

    3,634  Pulls 6  Tags Updated  10 months ago

  • deepseek-v3-abliterated

    A strong Mixture-of-Experts (MoE) language model with 671B total parameters, of which 37B are activated for each token.

    671b

    3,630  Pulls 5  Tags Updated  1 year ago

  • deephermes3-abliterated

    DeepHermes 3 Preview is the latest version of the flagship Hermes series of LLMs by Nous Research, and one of the first models in the world to unify reasoning (long chains of thought that improve answer accuracy) and normal LLM response modes in one model.

    8b

    3,481  Pulls 6  Tags Updated  1 year ago

  • homunculus-abliterated

    This is an uncensored version of arcee-ai/Homunculus created with abliteration

    thinking 12b

    2,993  Pulls 5  Tags Updated  10 months ago

  • seed-coder-abliterate

    This is an uncensored version of ByteDance-Seed/Seed-Coder-8B-Instruct created with abliteration

    2,902  Pulls 5  Tags Updated  11 months ago

  • jan-nano-abliterated

    This is an uncensored version of Menlo/Jan-nano created with abliteration

    tools thinking 4b

    2,866  Pulls 10  Tags Updated  9 months ago

  • am-thinking-abliterate

    This is an uncensored version of a-m-team/AM-Thinking-v1 created with abliteration

    tools 32b

    2,843  Pulls 6  Tags Updated  11 months ago

  • qwenlong-abliterated

    This is an uncensored version of Tongyi-Zhiwen/QwenLong-L1-32B created with abliteration

    thinking 32b

    2,840  Pulls 5  Tags Updated  10 months ago

  • phi4-reasoning-abliterated

    Phi 4 mini reasoning is a lightweight open model that balances efficiency with advanced reasoning ability.

    3.8b

    2,807  Pulls 4  Tags Updated  11 months ago

  • lfm2.5-abliterated

    LFM2.5 is a new family of hybrid models designed for on-device deployment.

    tools 1.2b

    2,765  Pulls 10  Tags Updated  2 months ago

  • tongyi-deepresearch-abliterated

    Tongyi DeepResearch is an agentic large language model featuring 30 billion total parameters, with only 3 billion activated per token.

    tools thinking 30b

    2,650  Pulls 5  Tags Updated  6 months ago

  • foundation-sec-abliterated

    Foundation-Sec-8B-abliterated is an uncensored version fine-tuned based on fdtn-ai/Foundation-Sec-8B. Foundation-Sec-8B is an open-weight, 8-billion-parameter foundational language model designed specifically for cybersecurity applications.

    8b

    2,583  Pulls 5  Tags Updated  11 months ago

  • granite3.2-abliterated

    Granite-3.2 is a family of long-context AI models from IBM Granite fine-tuned for thinking capabilities.

    tools 2b 8b

    2,517  Pulls 11  Tags Updated  1 year ago

  • tess-r1-abliterated

    Tess-R1 is designed with test-time compute in mind, and can produce Chain-of-Thought (CoT) reasoning before producing the final output.

    70b

    2,435  Pulls 9  Tags Updated  1 year ago

  • deepseek-r1-Fusion

    DeepSeek-R1-Distill-Qwen-Coder-32B-Fusion-9010 is a mixed model that combines the strengths of two powerful DeepSeek-R1-Distill-Qwen-based models: huihui-ai/DeepSeek-R1-Distill-Qwen-32B-abliterated and huihui-ai/Qwen2.5-Coder-32B-Instruct-abliterated.

    32b

    2,359  Pulls 6  Tags Updated  1 year ago

  • deepseekr1-qwq-skyt1-fusion

    DeepSeekR1-QwQ-SkyT1-32B-Fusion is a mixed model that combines the strengths of three powerful Qwen-based models: huihui-ai/DeepSeek-R1-Distill-Qwen-32B-abliterated, huihui-ai/QwQ-32B-Preview-abliterated, and huihui-ai/Sky-T1-32B-Preview-abliterated.

    32b

    2,358  Pulls 14  Tags Updated  1 year ago

  • aya-expanse-abliterated

    Cohere For AI's language models trained to perform well across 23 different languages.

    tools 8b 32b

    2,312  Pulls 9  Tags Updated  1 year ago

  • arcee-blitz-abliterated

    Arcee-Blitz (24B) is a new Mistral-based 24B model distilled from DeepSeek, designed to be both fast and efficient. We view it as a practical “workhorse” model that can tackle a range of tasks without the overhead of larger architectures.

    24b

    2,240  Pulls 5  Tags Updated  1 year ago

  • Hermes-3-Llama-3.2-abliterated

    Hermes 3 3B is a small but mighty new addition to the Hermes series of LLMs by Nous Research, and is Nous's first fine-tune in this parameter class.

    tools 3b

    2,223  Pulls 5  Tags Updated  1 year ago

  • acereason-nemotron-abliterated

    This is an uncensored version of nvidia/AceReason-Nemotron created with abliteration

    7b 14b

    2,161  Pulls 9  Tags Updated  10 months ago

  • mirothinker1-abliterated

    MiroThinker v1.0 is an open-source research agent designed to advance tool-augmented reasoning and information-seeking capabilities.

    tools thinking 8b 30b 72b

    2,137  Pulls 18  Tags Updated  4 months ago

  • fara-abliterated

    Fara-7B is Microsoft's first agentic small language model (SLM) designed specifically for computer use.

    vision 7b

    2,074  Pulls 5  Tags Updated  4 months ago

  • qwenlong-l1.5-abliterated

    QwenLong-L1.5 is a long-context reasoning model built upon Qwen3-30B-A3B-Thinking, augmented with memory mechanisms to process tasks far beyond its physical context window.

    tools thinking 30b

    1,999  Pulls 8  Tags Updated  3 months ago

  • exaone-deep-abliterated

    This is an uncensored version of LGAI-EXAONE/EXAONE-Deep-2.4B created with abliteration

    2.4b 7.8b

    1,882  Pulls 9  Tags Updated  9 months ago

  • skywork-o1-abliterated

    The Skywork o1 Open model series, developed by the Skywork team at Kunlun Inc., introduces models that incorporate o1-like slow thinking and reasoning capabilities.

    tools 8b

    1,873  Pulls 5  Tags Updated  1 year ago

  • deepscaler-abliterated

    A fine-tuned version of Deepseek-R1-Distilled-Qwen-1.5B that surpasses the performance of OpenAI’s o1-preview with just 1.5B parameters on popular math evaluations.

    1.5b

    1,868  Pulls 3  Tags Updated  1 year ago

  • falcon3-abliterated

    A family of efficient AI models under 10B parameters performant in science, math, and coding through innovative training techniques.

    1b 3b 7b 10b

    1,618  Pulls 21  Tags Updated  1 year ago

  • llama3.3-abliterated-ft

    The fine-tuned version of huihui_ai/llama3.3-abliterated

    tools 70b

    1,600  Pulls 4  Tags Updated  1 year ago

  • tinyr1-abliterated

    Tiny-R1-32B-Preview outperforms the 70B model Deepseek-R1-Distill-Llama-70B and nearly matches the full R1 model in math.

    tools 32b

    1,565  Pulls 6  Tags Updated  1 year ago

  • orchestrator-abliterated

    Orchestrator-8B is a state-of-the-art 8B parameter orchestration model designed to solve complex, multi-turn agentic tasks by coordinating a diverse set of expert models and tools.

    tools thinking 8b

    1,541  Pulls 5  Tags Updated  4 months ago

  • kanana-nano-abliterated

    Kanana is a series of bilingual language models developed by Kakao that demonstrate exceptional performance in Korean and competitive performance in English.

    2.1b

    1,392  Pulls 5  Tags Updated  1 year ago

  • marco-o1-abliterated

    An open large reasoning model for real-world solutions by the Alibaba International Digital Commerce Group (AIDC-AI).

    7b

    1,391  Pulls 5  Tags Updated  1 year ago

  • deepseek-v3-pruned

    DeepSeek-V3-Pruned-Coder-411B is a pruned version of DeepSeek-V3, reduced from 256 experts to 160. The pruned model is mainly used for code generation.

    411b

    1,354  Pulls 5  Tags Updated  1 year ago

  • granite3.1-dense-abliterated

    The IBM Granite 2B and 8B models are text-only dense LLMs trained on over 12 trillion tokens of data, demonstrating significant improvements over their predecessors in performance and speed in IBM’s initial testing.

    tools 2b 8b

    1,209  Pulls 11  Tags Updated  1 year ago

  • qwq-fusion

    qwq-fusion is a mixed model that combines the strengths of two powerful Qwen-based models: huihui-ai/QwQ-32B-Preview-abliterated and huihui-ai/Qwen2.5-Coder-32B-Instruct-abliterated.

    tools 32b

    1,172  Pulls 14  Tags Updated  1 year ago

  • smallthinker-abliterated

    A new small reasoning model fine-tuned from the Qwen 2.5 3B Instruct model.

    3b

    1,149  Pulls 5  Tags Updated  1 year ago

  • internlm3-abliterated

    InternLM has open-sourced InternLM3-8B-Instruct, an 8-billion-parameter instruction model designed for general-purpose usage and advanced reasoning.

    8b

    1,059  Pulls 5  Tags Updated  1 year ago

  • command-r7b-abliterated

    The smallest model in Cohere's R series delivers top-tier speed, efficiency, and quality to build powerful AI applications on commodity GPUs and edge devices.

    tools 7b

    981  Pulls 5  Tags Updated  1 year ago

  • perplexity-ai-r1-abliterated

    A version of the DeepSeek-R1 model that has been post-trained by Perplexity to provide unbiased, accurate, and factual information.

    70b

    964  Pulls 10  Tags Updated  1 year ago

  • nemotron-v1-abliterated

    Llama Nemotron: open, production-ready enterprise models.

    tools 8b

    949  Pulls 6  Tags Updated  11 months ago

  • huihui-moe

    Huihui-MoE is a Mixture of Experts (MoE) language model developed by huihui.ai

    tools thinking 1.2b 23b

    949  Pulls 10  Tags Updated  9 months ago

  • openhands-lm-abliterated

    OpenHands LM is built on the foundation of Qwen Coder 2.5 Instruct 32B, leveraging its powerful base capabilities for coding tasks.

    tools 32b

    880  Pulls 7  Tags Updated  1 year ago

  • microthinker

    MicroThinker is an experimental research model focused on advancing AI reasoning capabilities.

    tools 1b 3b 8b

    822  Pulls 12  Tags Updated  1 year ago

  • tulu3-abliterate

    Tülu 3 is a leading instruction-following model family, offering fully open-source data, code, and recipes by the Allen Institute for AI.

    8b 70b

    798  Pulls 9  Tags Updated  1 year ago

  • Qwen3-Coder

    This is not the abliterated version. Qwen3-Coder features the following key enhancements: significant performance, long-context capabilities, and agentic coding.

    tools thinking 480b

    731  Pulls 4  Tags Updated  8 months ago

  • glm-4.7-abliterated

    Advancing the Coding Capability

    tools thinking

    659  Pulls 1  Tag Updated  2 months ago

  • kimi-k2-abliterated

    A state-of-the-art mixture-of-experts (MoE) language model. Kimi K2-Instruct-0905 demonstrates significant improvements in performance on public benchmarks and real-world coding agent tasks.

    tools 1026b

    609  Pulls 6  Tags Updated  3 months ago

  • nemotron-abliterated

    Llama-3.1-Nemotron-70B-Instruct is a large language model customized by NVIDIA to improve the helpfulness of LLM generated responses to user queries.

    tools 70b

    553  Pulls 6  Tags Updated  1 year ago

  • perplexity-ai-r1

    A version of the DeepSeek-R1 model that has been post-trained by Perplexity to provide unbiased, accurate, and factual information.

    504  Pulls 2  Tags Updated  1 year ago

  • uwu-abliterated

    UwU is an experimental research model focused on advancing AI reasoning capabilities.

    7b

    460  Pulls 5  Tags Updated  1 year ago

  • glm4.6-abliterated

    Advanced agentic, reasoning and coding capabilities.

    tools thinking 357b

    424  Pulls 4  Tags Updated  4 months ago

  • s1.1-abliterated

    This model is a successor of s1-32B with slightly better performance.

    tools 32b

    373  Pulls 4  Tags Updated  1 year ago

  • s1-abliterated

    s1 is a reasoning model finetuned from Qwen2.5-32B-Instruct on just 1,000 examples. It matches o1-preview and exhibits test-time scaling via budget forcing.

    tools 32b

    348  Pulls 9  Tags Updated  1 year ago

  • devstral-2-abliterated

    A 123B model that excels at using tools to explore codebases, editing multiple files, and powering software engineering agents.

    tools 123b

    342  Pulls 5  Tags Updated  3 months ago

  • Lucie-abliterated

    Lucie-7B is a pretrained 7B parameter causal language model built by LINAGORA and OpenLLM-France.

    7b

    333  Pulls 5  Tags Updated  1 year ago

  • fluentlylm-prinum-abliterated

    fluently-lm/FluentlyLM-Prinum

    tools 32b

    254  Pulls 5  Tags Updated  1 year ago

  • deepseek-r1

    DeepSeek's first-generation of reasoning models with comparable performance to OpenAI-o1, including six dense models distilled from DeepSeek-R1 based on Llama and Qwen.

    thinking 671b

    232  Pulls 12  Tags Updated  10 months ago

  • skyt1-abliterated

    Sky-T1-32B-Preview is an experimental research model focused on advancing AI reasoning capabilities

    tools 32b

    213  Pulls 5  Tags Updated  1 year ago

  • huihui3.5

    Huihui 3.5 is a family of models created by remixing and merging multiple Qwen3.5 models of the same size.

    vision tools thinking 67b

    212  Pulls 1  Tag Updated  1 week ago

  • qwen2.5-censortune

    CensorTune uses Supervised Fine-Tuning (SFT) to fine-tune the Qwen2.5-Instruct model on 622 harmful instructions in a single fine-tuning iteration, achieving rejection of these instructions and a zero-pass rate for 320

    tools 0.5b 1.5b 3b

    193  Pulls 15  Tags Updated  11 months ago

  • deepseek-v3.1

    This is not the abliterated version. DeepSeek-V3.1 is a hybrid model that supports both thinking mode and non-thinking mode.

    tools thinking 671b

    182  Pulls 3  Tags Updated  7 months ago

  • megrez-abliterated

    Megrez-3B aims to provide a fast, compact, and powerful edge-side intelligence solution through software-hardware co-design.

    tools 7b

    166  Pulls 6  Tags Updated  1 year ago

  • deepseek-r1-pruned

    DeepSeek-R1-Pruned-Coder-411B is a pruned version of DeepSeek-R1, reduced from 256 experts to 160. The pruned model is mainly used for code generation.

    411b

    87  Pulls 3  Tags Updated  1 year ago
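
The models above are served through Ollama's registry and can be run locally. As a minimal sketch of the basic workflow, the snippet below pulls one size tag of deepseek-r1-abliterated (referenced in the first entry) and sends it a chat message via the official Ollama Python client (pip install ollama). It assumes a local Ollama server is running; the exact tag string huihui_ai/deepseek-r1-abliterated:8b is an assumption built from the profile name and the 8b size listed in the entry.

    import ollama

    # Assumed tag: profile name + model name + one of the sizes listed in the entry.
    MODEL = "huihui_ai/deepseek-r1-abliterated:8b"

    ollama.pull(MODEL)  # download the model if it is not already present locally

    response = ollama.chat(
        model=MODEL,
        messages=[{"role": "user", "content": "Explain step by step why 17 is prime."}],
    )
    print(response.message.content)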
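
For a long-context entry such as qwen2.5-1m-abliterated, note that Ollama loads models with a small default context window; the num_ctx option raises it per request. A hedged sketch, assuming the 7b tag and a hypothetical report.txt input file; the 64k value is illustrative, and the practical ceiling depends on available RAM/VRAM rather than the model's 1M-token limit.

    import ollama

    MODEL = "huihui_ai/qwen2.5-1m-abliterated:7b"  # assumed tag from the entry

    # Hypothetical input file; any long text works.
    long_document = open("report.txt", encoding="utf-8").read()

    response = ollama.chat(
        model=MODEL,
        messages=[{"role": "user", "content": "Summarize this document:\n\n" + long_document}],
        options={"num_ctx": 65536},  # request a 64k-token window instead of the default
    )
    print(response.message.content)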
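
Entries tagged tools (dolphin3-abliterated among them) accept function schemas and may reply with structured tool calls rather than plain text. A sketch under stated assumptions: the get_weather tool is hypothetical, and huihui_ai/dolphin3-abliterated:8b is an assumed tag.

    import ollama

    MODEL = "huihui_ai/dolphin3-abliterated:8b"  # assumed tag from the entry

    # Hypothetical tool schema, in the OpenAI-style format accepted by ollama.chat.
    tools = [{
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Return the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }]

    response = ollama.chat(
        model=MODEL,
        messages=[{"role": "user", "content": "What is the weather in Oslo?"}],
        tools=tools,
    )

    # A tools-capable model may answer directly or emit structured tool calls.
    for call in response.message.tool_calls or []:
        print(call.function.name, call.function.arguments)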
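
Entries tagged vision accept images alongside text; granite3.2-vision-abliterated's document-understanding use case is a natural fit. A minimal sketch, assuming the 2b tag and a hypothetical invoice.png; the Python client accepts local image paths in a message's images field.

    import ollama

    MODEL = "huihui_ai/granite3.2-vision-abliterated:2b"  # assumed tag from the entry

    response = ollama.chat(
        model=MODEL,
        messages=[{
            "role": "user",
            "content": "Extract the line items and totals from this invoice as a table.",
            "images": ["invoice.png"],  # hypothetical local file path
        }],
    )
    print(response.message.content)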
