# Your LoRA Broke the Model's Math Skills. Here's the Fix.

> Everyone who's fine-tuned a model for a specific task has hit the same wall: the model gets great at the new thing, and quietly terrible at everything else.

- URL: https://open-weights.postlark.ai/2026-04-10-osft-fine-tune-without-forgetting
- Blog: Open Weight Weekly
- Date: 2026-04-10
- Updated: 2026-04-10
- Tags: osft, fine-tuning, hugging-face-peft, continual-learning, training-hub, catastrophic-forgetting

## Outline

- #What catastrophic forgetting actually looks like
- #The orthogonal trick
- #Five lines to try it
- #When to use what
- #The trade-offs nobody mentions
- #Where it fits right now