Zoo Labs Foundation Inc is a 501(c)(3) non-profit organization dedicated to democratizing artificial intelligence. We believe that advanced AI training should be accessible to everyone - from individual researchers and educators to small teams and large organizations. Gym is our flagship open-source platform, embodying our mission to break down the financial and technical barriers that have kept AI innovation in the hands of a few.
Our Mission: Make AI training 99.8% cheaper, 1000× faster to deploy, and completely transparent.
Learn from experience, not parameters - Our breakthrough Training-Free GRPO achieves comparable or better performance than traditional fine-tuning by operating in the context space instead of parameter space.
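To make the context-space idea concrete, here is a minimal, hypothetical sketch (the function name, prompt format, and example experiences are illustrative assumptions, not the Gym API): the base model's weights stay frozen, and adaptation happens by editing a bank of natural-language experiences that is injected into the prompt.

```python
# Hypothetical sketch (not the Gym API): adapting a model in context space.
# Instead of updating weights, learned "experiences" are plain-text lessons
# prepended to the prompt at inference time.

experiences = [
    "When a geometry problem gives three side lengths, check the triangle inequality first.",
    "For modular-arithmetic questions, reduce every term before multiplying.",
]

def build_prompt(question: str, experience_bank: list) -> str:
    """Compose a prompt from fixed instructions plus the learned experiences."""
    lessons = "\n".join(f"- {e}" for e in experience_bank)
    return (
        "You are a careful math solver. Apply these lessons learned from past attempts:\n"
        f"{lessons}\n\n"
        f"Problem: {question}\n"
    )

# Adding or removing an experience changes behavior immediately: no retraining,
# no new checkpoints, and every "parameter" is human-readable text.
print(build_prompt("Find the last digit of 7^2025.", experiences))
```

Because the experiences are plain text, adding, removing, or editing one takes effect immediately, and every change stays auditable.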
💰
99.8% Cost Reduction
Train for $18 instead of $10,000+. Use 50-100 examples instead of thousands. Finish in minutes instead of hours or days.
📈
Better Performance
+2-5% improvement over traditional fine-tuning on AIME math benchmarks. Achieves 82.7% on AIME24, 73.3% on AIME25.
🔍
Human-Readable
Every learned experience is natural language - transparent, auditable, and explainable. No black box parameters.
🧩
Composable
Experiences are modular - add, remove, or modify without retraining. Share knowledge across models and domains.
🌐
Decentralized
Contribute to global semantic memory via DSO. 31.7× BitDelta compression, Byzantine-robust aggregation.
⚡
Lightning Fast
Minutes to adapt, not days. No GPU clusters required. Runs on CPU or single GPU. Instant deployment.
```bash
# Install Gym
pip install zoo-gym
```
```bash
# Train with Continuous Learning GRPO
gym train \
    --model_name_or_path Qwen/Qwen3-4B-Instruct \
    --template qwen3 \
    --dataset alpaca_en_demo \
    --finetuning_type lora \
    --output_dir ./output/my-model
```
```python
from gym.train import run_sft
from gym.hparams import get_train_args

# Configure training
config = {
    "model_name_or_path": "Qwen/Qwen3-4B-Instruct",
    "template": "qwen3",
    "dataset": "alpaca_en_demo",
    "finetuning_type": "lora",
    "output_dir": "./output/my-model",
}

# Run training
model_args, data_args, training_args, finetuning_args, generating_args = get_train_args(config)
run_sft(model_args, data_args, training_args, finetuning_args, generating_args)
```
```bash
# Launch web UI
gym webui

# Open browser at http://localhost:7860
# Select model, dataset, and training method
# Click "Start Training"
```
Federated Active Inference at the Token Level - Share compressed semantic experiences across nodes with 31.7× BitDelta compression and Byzantine-robust aggregation (a sketch of both ideas follows the list below).
10,000× communication efficiency vs federated learning
Byzantine-tolerant - withstands up to 33% malicious nodes
Privacy-preserving - natural language experiences
On-chain governance - DAO voting for experience quality
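To illustrate the compression and aggregation ideas above, here is a hypothetical numpy sketch, not the DSO implementation: BitDelta-style compression keeps one sign bit per weight plus a single scale per tensor, and a coordinate-wise median serves as a simple Byzantine-robust aggregation rule. The function names and the mean-absolute-delta scale are illustrative assumptions.

```python
# Hypothetical sketch of BitDelta-style 1-bit delta compression and a
# median-based Byzantine-robust aggregator; names and the mean-|delta|
# scale are illustrative, not the DSO protocol itself.
import numpy as np

def compress_delta(w_finetuned: np.ndarray, w_base: np.ndarray):
    """Keep only the sign of each weight delta plus one scale per tensor."""
    delta = w_finetuned - w_base
    scale = float(np.abs(delta).mean())                   # single scalar per tensor
    signs = np.where(delta >= 0, 1, -1).astype(np.int8)   # 1-bit payload per weight
    return signs, scale

def decompress_delta(signs: np.ndarray, scale: float) -> np.ndarray:
    """Reconstruct an approximate delta from the 1-bit payload."""
    return signs * scale

def robust_aggregate(deltas: list) -> np.ndarray:
    """Coordinate-wise median: outlier updates from a minority of malicious
    nodes cannot drag the aggregate far from the honest consensus."""
    return np.median(np.stack(deltas), axis=0)

# Toy usage: three honest nodes and one adversarial node submit updates.
base = np.zeros(4)
honest = [base + np.array([0.10, -0.20, 0.05, 0.00]) + 0.01 * i for i in range(3)]
malicious = np.array([100.0, -100.0, 100.0, -100.0])
updates = [decompress_delta(*compress_delta(w, base)) for w in honest] + [malicious]
print(robust_aggregate(updates))  # stays close to the honest updates
```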