SmolHub Playground
Small-scale models, proof-of-concepts, and AI experiments.
-
A PyTorch implementation of a Mixtral-inspired transformer model with Mixture of Experts (MoE), designed for text generation and understanding tasks. This model is built...
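The blurb names Mixtral-style MoE routing without showing it. As a minimal sketch (not the project's actual code, and using numpy rather than PyTorch for self-containment): each token's router logits pick the top-k experts, and only those experts run, with their outputs mixed by softmax weights renormalized over the selected k. All names here (`moe_forward`, the toy linear "experts") are illustrative.

```python
import numpy as np

def moe_forward(x, w_gate, experts, k=2):
    """Route each token to its top-k experts and mix their outputs by
    softmax gate weights renormalized over just those k experts."""
    logits = x @ w_gate                          # (tokens, n_experts)
    topk = np.argsort(logits, axis=-1)[:, -k:]   # indices of the k largest logits
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        sel = logits[t, topk[t]]
        gate = np.exp(sel - sel.max())
        gate /= gate.sum()                       # softmax over selected experts only
        for g, e in zip(gate, topk[t]):
            out[t] += g * experts[e](x[t])       # only k experts run per token
    return out

rng = np.random.default_rng(0)
d, n_experts, tokens = 8, 4, 5
x = rng.standard_normal((tokens, d))
w_gate = rng.standard_normal((d, n_experts))
# Toy experts: fixed random linear maps standing in for expert FFNs.
mats = [rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(n_experts)]
experts = [lambda v, M=M: v @ M for M in mats]
y = moe_forward(x, w_gate, experts, k=2)
print(y.shape)  # (5, 8)
```

The key property this illustrates: compute per token scales with k, not with the total expert count.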
-
A compact implementation of an Encoder-Decoder Transformer for sequence-to-sequence translation tasks. This project implements an English-to-Hindi translation model using the Samanantar dataset....
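Two masks do most of the structural work in an encoder-decoder setup like this: a causal mask so the decoder only sees past target tokens, and a padding mask so attention ignores pad positions in batched source sentences. A minimal sketch (illustrative helper names, not the project's code):

```python
import numpy as np

def causal_mask(T):
    """Lower-triangular mask: decoder position i may attend to j <= i."""
    return np.tril(np.ones((T, T), dtype=bool))

def padding_mask(lengths, T):
    """True where a source token is real, False where it is padding."""
    return np.arange(T)[None, :] < np.array(lengths)[:, None]

m = causal_mask(4)           # used in decoder self-attention
p = padding_mask([3, 4], 4)  # used in encoder self- and cross-attention
print(m.astype(int))
print(p.astype(int))
```

In the real model these boolean masks are turned into additive `-inf` terms on the attention scores before the softmax.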
-
A PyTorch implementation of a DeepSeek V3-inspired transformer model with Mixture of Experts (MoE), Latent Attention, and other advanced features.
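The latent attention mentioned here compresses keys and values through a small shared latent, so the KV cache stores only the low-dimensional latent. A minimal single-head sketch of that idea, assuming illustrative projection names (`W_down`, `W_uk`, `W_uv`) rather than the project's own:

```python
import numpy as np

rng = np.random.default_rng(1)
d, d_latent, T = 16, 4, 6
x = rng.standard_normal((T, d))

W_down = rng.standard_normal((d, d_latent)) / np.sqrt(d)       # compress
W_uk = rng.standard_normal((d_latent, d)) / np.sqrt(d_latent)  # expand to keys
W_uv = rng.standard_normal((d_latent, d)) / np.sqrt(d_latent)  # expand to values
W_q = rng.standard_normal((d, d)) / np.sqrt(d)

c = x @ W_down   # (T, d_latent): this small latent is all the cache keeps
k = c @ W_uk     # keys and values are reconstructed from the latent
v = c @ W_uv
q = x @ W_q

scores = q @ k.T / np.sqrt(d)
scores = np.where(np.tril(np.ones((T, T), dtype=bool)), scores, -np.inf)
attn = np.exp(scores - scores.max(axis=-1, keepdims=True))
attn /= attn.sum(axis=-1, keepdims=True)
out = attn @ v
print(c.shape, out.shape)  # cache is (6, 4) vs. separate (6, 16) K and V
```

The design choice being illustrated: caching `c` instead of full K and V cuts KV-cache memory roughly by `d / d_latent` per head.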
-
So, I trained an 88M-parameter Llama architecture I coded from the ground up to build a small instruct model, going through the below-mentioned stages from...
-
A PyTorch implementation of a Mixtral-inspired transformer model with Mixture of Experts (MoE), Flash Attention, and other advanced features.
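Flash Attention's core trick is the online-softmax recurrence: scores are processed block by block with running max and denominator, so the full T x T score matrix is never materialized. A minimal one-query sketch of that recurrence (illustrative, not the project's kernel; the real thing fuses this into a GPU kernel over tiles):

```python
import numpy as np

def attention_online(q, K, V, block=2):
    """One query row of attention computed block-by-block with the
    online-softmax recurrence, never forming the full score row at once."""
    m = -np.inf                   # running max of scores seen so far
    l = 0.0                       # running softmax denominator
    acc = np.zeros(V.shape[1])    # running weighted sum of values
    for s in range(0, K.shape[0], block):
        sc = q @ K[s:s + block].T / np.sqrt(q.shape[0])
        m_new = max(m, sc.max())
        scale = np.exp(m - m_new)       # rescale old state to the new max
        p = np.exp(sc - m_new)
        l = l * scale + p.sum()
        acc = acc * scale + p @ V[s:s + block]
        m = m_new
    return acc / l

rng = np.random.default_rng(2)
T, d = 8, 4
q = rng.standard_normal(d)
K = rng.standard_normal((T, d))
V = rng.standard_normal((T, d))

# Reference: ordinary softmax attention for the same query row.
sc = q @ K.T / np.sqrt(d)
w = np.exp(sc - sc.max()); w /= w.sum()
ref = w @ V
print(np.allclose(attention_online(q, K, V), ref))  # True
```

The recurrence is exact, not an approximation, which is why Flash Attention changes memory traffic but not the model's output.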
-
So, I trained a 130M-parameter Llama architecture I coded from the ground up to build a small instruct model, going through the below-mentioned stages from...