Meta has introduced SAM 3 and SAM 3D, major upgrades to its Segment Anything Collection that dramatically improve how AI understands and manipulates the visual world.
SAM 3 enables users to detect, segment, and track objects across images and videos using both visual and text prompts, allowing for far more precise edits than previous versions.
The model can interpret detailed language, such as “red baseball cap,” or even complex descriptions involving multiple conditions, making creative editing and visual manipulation easier and more intuitive than ever.
These capabilities are already being woven into Meta’s creative tools, bringing new effects, targeted edits, and richer interactions to apps like Edits, Vibes, and Meta AI.
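For developers, the workflow is conceptually simple: load an image, hand the model a short text prompt, and get back a mask for every matching instance. Here is a minimal sketch of that idea in Python. Note that the `sam3` package name and the `load_model`/`segment` calls are illustrative placeholders, not Meta's actual API, so check the official release for the real interface.

```python
# Hypothetical sketch of text-prompted segmentation with SAM 3.
# `sam3`, `load_model`, and `segment` are placeholder names, not
# Meta's released API; consult the official repo for the real calls.
from PIL import Image

import sam3  # hypothetical package name

# Load a released checkpoint (path is illustrative).
model = sam3.load_model("sam3_checkpoint.pt")

image = Image.open("street_scene.jpg")

# A short noun phrase selects every matching instance in the image.
results = model.segment(image, prompt="red baseball cap")

for obj in results:
    print(obj.score, obj.box)  # per-instance confidence and bounding box
    # obj.mask is a binary mask with the same height/width as the image
```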
SAM 3D builds on this progress by reconstructing full 3D objects, environments, and even human bodies from a single image.
With two powerful models, SAM 3D Objects and SAM 3D Body, creators and researchers can generate accurate 3D assets that outperform previous reconstruction methods.
Meta is already applying SAM 3D in real products, such as Marketplace’s new “View in Room” feature.
Meta is releasing model checkpoints, datasets, and tools for developers to experiment with on the Segment Anything Playground.
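To give a rough sense of what experimenting with single-image reconstruction might look like in code, here is a short sketch. The `sam3d` module and its `reconstruct` call are assumed placeholder names rather than the released interface, so treat this as an illustration of the idea, not a recipe.

```python
# Hypothetical sketch of single-image 3D reconstruction with SAM 3D Objects.
# `sam3d`, `load_model`, and `reconstruct` are assumed placeholder names;
# see Meta's released checkpoints and tooling for the actual entry points.
from PIL import Image

import sam3d  # hypothetical package name

model = sam3d.load_model("sam3d_objects_checkpoint.pt")

image = Image.open("armchair.jpg")

# One RGB photo in, a textured 3D mesh out.
mesh = model.reconstruct(image)
mesh.export("armchair.glb")  # e.g. for a "View in Room"-style preview
```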
Key Takeaways:
- SAM 3 improves object detection and tracking using text or visual prompts.
- SAM 3D reconstructs 3D objects and scenes from a single image.
- Both models are available to try on the Segment Anything Playground.
Wanna know what’s trending online every day? Subscribe to Vavoza Insider to access the latest business and marketing insights, news, and trends daily with unmatched speed and conciseness! 🗞️