NVIDIA and AWS are collaborating to accelerate AI and robotics development in the cloud. The partnership integrates NVIDIA’s CUDA-Q platform with Amazon Braket, enabling developers to build and test hybrid quantum-classical workflows on GPU-accelerated simulators.
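A hybrid quantum-classical workflow typically pairs a classical optimizer loop with repeated circuit evaluations on a simulator or QPU. Below is a minimal, library-free sketch of that pattern, assuming a toy single-parameter "circuit" whose Z-expectation is cos(θ); the function names are illustrative stand-ins, not CUDA-Q or Braket APIs, and a real workflow would dispatch the evaluation to a GPU-accelerated simulator instead:

```python
import math

def quantum_expectation(theta: float) -> float:
    """Stand-in for a circuit evaluation on a simulator or QPU.

    For a single qubit prepared with Ry(theta), the expectation of Z
    is cos(theta); in a real workflow this call would be dispatched
    to a GPU-accelerated state-vector simulator.
    """
    return math.cos(theta)

def minimize(theta: float, lr: float = 0.2, steps: int = 100) -> float:
    """Classical outer loop: gradient descent using the parameter-shift rule."""
    for _ in range(steps):
        # Parameter-shift gradient: d<Z>/dθ = (E(θ+π/2) - E(θ-π/2)) / 2
        grad = (quantum_expectation(theta + math.pi / 2)
                - quantum_expectation(theta - math.pi / 2)) / 2
        theta -= lr * grad
    return theta

theta = minimize(0.5)
print(round(quantum_expectation(theta), 3))  # prints -1.0 (θ converges to π)
```

The split mirrors the architecture the integration targets: the optimizer runs on conventional hardware while each expectation-value query is offloaded to the (simulated) quantum device.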
NVIDIA has released the Jetson Orin Nano Super Developer Kit for $249, targeting generative AI applications at the edge. It delivers 1.7 times the generative AI performance of its predecessor, reaching 67 INT8 TOPS, making it suitable for robotics and other edge AI tasks. Aimed at developers and hobbyists, the kit pairs that performance with a low price point in a bid to disrupt the edge generative AI market.
Nvidia has identified a performance issue with its new NVIDIA App that is causing a significant drop in gaming performance, up to 15%, for some users. The company has released a temporary fix: disabling the app's optional AI-powered filters. The issue highlights the risks of shipping features with AI processing before they are fully tested.
Nvidia is actively supporting the use of artificial intelligence to advance quantum computing. AI is being applied across the field, from designing qubits and creating algorithms to controlling devices and correcting errors in real time. This includes integrating Nvidia’s CUDA-Q platform with quantum hardware from companies such as Infleqtion and IonQ, with the resulting systems applied to materials design and drug-discovery calculations.
This cluster focuses on the collaboration between TSMC and NVIDIA to establish AI chip production in Arizona. Initial reports suggest a significant investment and the potential for increased AI chip manufacturing capacity in the US. Under the arrangement, chips would be fabricated in Arizona, but packaging would still take place in Taiwan because Arizona lacks the necessary packaging infrastructure.
Nvidia’s Blackwell platform posted impressive results in the MLPerf Training 4.1 benchmarks, demonstrating significant gains over the previous generation, achieving more than double the performance in some cases. The advance is driven by Blackwell’s architectural improvements and AI-specific optimizations, and it stands to accelerate the training of large language models, driving innovation in generative AI applications.
Amazon is actively developing custom AI processors to reduce its dependence on Nvidia, the dominant player in the market. The effort, led by its Annapurna Labs division, is driven by Amazon's desire to improve efficiency and control within its data centers, and the chips are already being tested by prominent AI companies, including Anthropic, a rival to Microsoft-backed OpenAI. The move reflects a broader shift toward custom silicon, with tech giants pursuing bespoke hardware to optimize their AI capabilities. Amazon's Trainium 2 AI chips, due to be unveiled in December and expected to rival Nvidia's offerings, underscore the fierce competition in the AI hardware market. As demand for AI processing power surges, companies like Amazon are leveraging in-house resources and expertise to build specialized chips tailored to their specific needs and challenges.