Luminal is a deep learning library that uses search-based compilation to achieve high performance.
Show HN
To run the demo shown on HN on a Mac, clone this repo and run:

cd demos/matmul
cargo run --release
Important: We're undergoing a large transition to "2.0", which introduces large-scale kernel search. This radically simplifies the compiler stack and allows us to discover complex optimizations entirely automatically. Please keep an eye out for breaking changes, which are usually staged in crates/luminal_2 before being merged into the main crate.
Usage
use luminal::prelude::*;

// Setup graph and tensors
let mut cx = Graph::new();
let a = cx.tensor((3, 1)).set([[1.0], [2.0], [3.0]]);
let b = cx.tensor((1, 4)).set([[1.0, 2.0, 3.0, 4.0]]);

// Do math...
let mut c = a.matmul(b).retrieve();

// Compile and run graph
cx.compile(<(GenericCompiler, CPUCompiler)>::default(), &mut c);
cx.execute();

// Get result
println!("Result: {:?}", c);
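A note on the flow above: Luminal is lazy, so `a.matmul(b)` only records an operation in the graph rather than computing anything. `retrieve()` marks `c` as an output to keep after execution, `compile` lets the chosen compilers rewrite the whole graph (here the device-agnostic GenericCompiler followed by the CPU backend), and nothing actually runs until `cx.execute()`, after which the result can be read from `c`.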
Getting Started
Llama 3 8B
Below is a quick example of how you can run Llama 3 8B locally using Luminal. To go in-depth on this example, check out the documentation here.
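As a rough sketch, the workflow mirrors the matmul demo above: build and run the example from its directory in the repo. The path below is an assumption, not the repo's confirmed layout, and you will also need to download the Llama 3 8B weights first as described in the linked documentation.

# hypothetical example location; check the repo for the actual path
cd examples/llama
# after downloading the model weights per the docs, build and run:
cargo run --release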