Stable Diffusion
A latent text-to-image diffusion model.
Run Stable Diffusion natively on Apple Silicon with peak Core ML performance and total privacy.

Mochi Diffusion is a high-performance macOS application that runs Stable Diffusion models through Apple's Core ML framework. Built for Apple Silicon (M1 through M4) and Intel Macs, it avoids the Python-based overhead common in AI tooling and can dispatch work to the Apple Neural Engine (ANE) for maximum efficiency.

As a local-first AI tool, Mochi Diffusion gives users complete data sovereignty and zero operational cost. Its architecture focuses on reducing memory pressure, allowing complex generative tasks to run on hardware with as little as 8 GB of unified memory. The app supports a wide range of Core ML-converted models, including SD 1.5, SDXL, and specialized fine-tunes. By leveraging the Metal graphics API and the Neural Engine, it achieves generation speeds competitive with cloud-based services while remaining entirely offline. That makes it well suited to professionals in high-security environments, creative hobbyists, and developers who need rapid, iterative prototyping without subscription fees or the round-trip latency of remote server clusters.
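As a rough sanity check on the 8 GB claim: the widely cited parameter counts for Stable Diffusion 1.5 (about 860M for the UNet, 123M for the CLIP text encoder, and 83M for the VAE) put the float16 weight footprint at roughly 2 GB, leaving headroom for activations and the OS. A back-of-the-envelope sketch:

```python
# Approximate, widely cited parameter counts for Stable Diffusion 1.5 components.
PARAMS = {"unet": 860_000_000, "text_encoder": 123_000_000, "vae": 83_000_000}
BYTES_PER_PARAM = 2  # float16 weights

def model_footprint_gb(params=PARAMS, bytes_per_param=BYTES_PER_PARAM):
    """Total weight size in GiB when every parameter is stored as float16."""
    return sum(params.values()) * bytes_per_param / 1024**3

footprint = model_footprint_gb()  # just under 2 GiB for the SD 1.5 weights
```

This ignores the working memory for latents and attention buffers, which is exactly what the on-demand shard loading described below is meant to keep small.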
Mochi Diffusion specializes in three overlapping domains: text-to-image generation, Core ML model execution, and offline generative AI.
Directly maps model weights to ANE instructions to minimize CPU/GPU contention.
Aggressive memory management for loading model shards into unified memory on-demand.
Supports Canny, Depth, and Scribble maps to guide spatial layout during generation.
Allows users to toggle between CPU, GPU, and Neural Engine for different stages of the diffusion process.
Built using Apple's native UI framework for a responsive, non-Electron experience.
Supports any SD model converted to Core ML format via the diffusers library.
Extracts and displays prompt/seed metadata directly from generated PNG chunks.
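The metadata feature above relies on standard PNG tEXt chunks, which store keyword/value pairs directly in the image file. As an illustration of how prompt and seed data can be recovered from those chunks (the exact keyword names Mochi Diffusion writes are an assumption here, not taken from its source), a minimal parser might look like:

```python
import struct
import zlib

PNG_SIGNATURE = b"\x89PNG\r\n\x1a\n"

def make_chunk(ctype: bytes, body: bytes) -> bytes:
    """Assemble one PNG chunk: length, type, payload, CRC over type+payload."""
    return (struct.pack(">I", len(body)) + ctype + body
            + struct.pack(">I", zlib.crc32(ctype + body)))

def read_text_chunks(data: bytes) -> dict:
    """Extract keyword -> value pairs from every tEXt chunk in raw PNG bytes."""
    if not data.startswith(PNG_SIGNATURE):
        raise ValueError("not a PNG file")
    pos, out = len(PNG_SIGNATURE), {}
    while pos + 8 <= len(data):
        length, ctype = struct.unpack(">I4s", data[pos:pos + 8])
        body = data[pos + 8:pos + 8 + length]
        if ctype == b"tEXt":
            # tEXt payload is keyword, a NUL separator, then Latin-1 text.
            keyword, _, value = body.partition(b"\x00")
            out[keyword.decode("latin-1")] = value.decode("latin-1")
        if ctype == b"IEND":
            break
        pos += 8 + length + 4  # 8-byte header + payload + 4-byte CRC
    return out

# Demo: a minimal 1x1 PNG skeleton carrying hypothetical generation metadata.
demo = PNG_SIGNATURE
demo += make_chunk(b"IHDR", struct.pack(">IIBBBBB", 1, 1, 8, 0, 0, 0, 0))
demo += make_chunk(b"tEXt", b"prompt\x00a watercolor fox")
demo += make_chunk(b"tEXt", b"seed\x002048")
demo += make_chunk(b"IEND", b"")

meta = read_text_chunks(demo)
```

Because the metadata rides in the PNG itself, a generated image can be dropped back into the app later to recover its prompt and seed.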
Download the latest .dmg release from the official GitHub repository.
Drag Mochi Diffusion to your Applications folder and launch it to initialize directory structures.
Create a directory named 'MochiDiffusion' in your Documents folder or preferred external drive.
Download Core ML-converted models (.zip or .mlpackage) from Hugging Face or CivitAI.
Uncompress and place model folders into the 'models' subdirectory within your MochiDiffusion path.
Restart the application to refresh the model selection dropdown menu.
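Core ML conversions typically ship each model as a folder of compiled components (names like TextEncoder.mlmodelc, Unet.mlmodelc, and VAEDecoder.mlmodelc follow Apple's ml-stable-diffusion output). A sketch of the kind of directory scan the dropdown refresh implies; the required component names are an assumption based on that conversion output, not Mochi Diffusion's actual validation logic:

```python
from pathlib import Path
import tempfile

# Components a Core ML Stable Diffusion conversion typically contains
# (assumed names, following Apple's ml-stable-diffusion output).
REQUIRED = {"TextEncoder.mlmodelc", "Unet.mlmodelc", "VAEDecoder.mlmodelc"}

def discover_models(models_dir: Path) -> list:
    """Return names of subfolders that contain every required component."""
    found = []
    for sub in sorted(Path(models_dir).iterdir()):
        if sub.is_dir() and REQUIRED <= {p.name for p in sub.iterdir()}:
            found.append(sub.name)
    return found

# Demo against a throwaway directory tree: one complete model, one stub.
root = Path(tempfile.mkdtemp())
complete = root / "models" / "sd15-coreml"
complete.mkdir(parents=True)
for name in REQUIRED:
    (complete / name).mkdir()  # .mlmodelc components are themselves folders
(root / "models" / "incomplete").mkdir()

models = discover_models(root / "models")
```

Only the folder with all required components is listed, which is why an unzipped model missing a component would not appear in the dropdown.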
Select your target compute unit (CPU & GPU, GPU Only, or All Units including Neural Engine).
Enter your primary and negative prompts into the respective text fields.
Configure generation parameters including Steps, Guidance Scale, and Seed.
Click 'Generate' and monitor the real-time progress bar powered by Metal.
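Of those parameters, Guidance Scale is the least self-explanatory: at each diffusion step the model produces an unconditional and a prompt-conditioned noise prediction, and the scale controls how far the result is pushed toward the prompt (classifier-free guidance). A toy illustration, with plain lists standing in for latent tensors:

```python
def apply_guidance(noise_uncond, noise_cond, guidance_scale):
    """Classifier-free guidance: push the prediction toward the conditioned one."""
    return [u + guidance_scale * (c - u)
            for u, c in zip(noise_uncond, noise_cond)]

# guidance_scale = 1.0 reproduces the conditioned prediction exactly;
# larger values (7-8 is a common default) follow the prompt more aggressively.
blended = apply_guidance([0.0, 0.2], [1.0, 0.2], 7.5)
```

Higher scales trade diversity for prompt adherence, which is why raising Guidance Scale tends to produce more literal but sometimes over-saturated results.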
Community feedback:
"Users praise the efficiency and UI cleanliness, noting it as the definitive way to use SD on Mac without technical friction."