
NVIDIA Newsroom

Message frequency: 9.53 / week

Message History

For 25 years, the NVIDIA Graduate Fellowship Program has supported graduate students doing outstanding work relevant to NVIDIA technologies. Today, the program announced the latest awards of up to $60,000 each to 10 Ph.D. students involved in research that spans all areas of computing innovation. Selected from a highly competitive applicant pool, the awardees will …

Developers, researchers, hobbyists and students can take a byte out of holiday shopping this season as NVIDIA has unwrapped special discounts on the NVIDIA Jetson family of developer kits for edge AI and robotics — available through Sunday, Jan. 11. Whether tapping into the breakthrough capabilities of Jetson AGX Thor, the versatility of Jetson AGX …

GeForce NOW is decking the digital halls with 30 new games to keep spirits high all month long. Join the fun with Hogwarts Legacy, the LEGO Harry Potter Collection and a sleighful of new adventures streaming straight from the cloud. The “Half-Price Holiday” sale keeps the savings rolling after Black Friday, with premium GeForce NOW …

The top 10 most intelligent open-source models all use a mixture-of-experts architecture. Kimi K2 Thinking, DeepSeek-R1, Mistral Large 3 and others run 10x faster on NVIDIA GB200 NVL72. A look under the hood of virtually any frontier model today will reveal a mixture-of-experts (MoE) model architecture that mimics the efficiency of the human brain. Just …

Today, Mistral AI announced the Mistral 3 family of open-source multilingual, multimodal models, optimized across NVIDIA supercomputing and edge platforms. Mistral Large 3 is a mixture-of-experts (MoE) model — instead of firing up every neuron for every token, it only activates the parts of the model with the most impact. The result is efficiency …
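
To make the MoE idea above concrete, here is a minimal sketch of top-k expert routing in PyTorch. It is illustrative only: the `MoELayer` name, layer sizes and expert count are assumptions made for the example, not details of Mistral Large 3 or of any NVIDIA implementation.

```python
# Minimal sketch of mixture-of-experts (MoE) top-k routing.
# All dimensions and names here are illustrative assumptions,
# not the actual Mistral Large 3 architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MoELayer(nn.Module):
    def __init__(self, d_model=64, d_ff=256, num_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        # One small feed-forward network per expert.
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(num_experts)
        ])
        # The router scores each token against every expert.
        self.router = nn.Linear(d_model, num_experts)

    def forward(self, x):                      # x: (tokens, d_model)
        scores = self.router(x)                # (tokens, num_experts)
        weights, chosen = scores.topk(self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)   # normalize over chosen experts only
        out = torch.zeros_like(x)
        # Only the top-k experts run for each token; the rest stay idle.
        # This sparsity is what the announcement describes as not firing
        # up every neuron for every token.
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = chosen[:, slot] == e
                if mask.any():
                    out[mask] += weights[mask, slot, None] * expert(x[mask])
        return out

tokens = torch.randn(16, 64)
print(MoELayer()(tokens).shape)  # torch.Size([16, 64])
```

For each token, only top_k of the num_experts feed-forward networks execute, so compute per token stays roughly constant even as total parameters grow: the efficiency the announcement attributes to the MoE design.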