Fully vertically integrated AI cloud platform Nscale partnered with Singaporean telco Singtel to unlock both companies’ GPU capacity across Europe and Southeast Asia.
New Relic integrates with NVIDIA NIM inference microservices to help deliver high-performing models optimised for NVIDIA GPUs
New Relic AI monitoring provides in-depth insights across the AI stack for apps built on NVIDIA NIM
New Relic’s platform centralises data from 60+ AI integrations to provide comprehensive observability
Snowflake has announced a new collaboration with NVIDIA, helping customers and partners rapidly build bespoke AI solutions leveraging NVIDIA AI.
At Intel's pre-Computex Tech Tour in Taipei, the new Lunar Lake processors were officially announced.
AI data platform company Vast Data collaborated with cloud networking solutions provider Arista Networks to offer optimised AI infrastructure, combining ethernet switching with a performant platform that can scale to meet the needs of enterprises and GPU cloud service providers.
Northern Data Group’s GenAI cloud platform, Taiga Cloud, brings a comprehensive, in-region AI-as-a-Service offering to Europe by leveraging NVIDIA GPUs and the VAST Data Platform
The world's largest software repository, GitHub, has introduced new updates for GitHub Actions, bringing stronger security and increased power to GitHub-hosted runners. These include Azure private networking, GPU-hosted runners for machine learning, and more. The new features bring a huge boost for enterprise customers.
Processor manufacturer AMD says AI is the most transformational technology in 50 years, and that the biggest driver of this has been generative AI. However, the amazing things AI can achieve are constrained by the availability and capability of GPUs - so, to accelerate AI, AMD has today announced its brand new AMD Instinct MI300X accelerator, bringing the highest performance in the world for generative AI.
COMPANY NEWS: VAST Data, the AI data platform company, and Lambda, a leading Infrastructure-as-a-Service and compute provider for public and private GPU infrastructure, today announced a strategic partnership that will enable the world's first hybrid cloud experience dedicated to AI and deep learning workloads.
The data cloud Snowflake may soon be the apps-and-data cloud, with the company announcing Snowpark Container Services and the ability to run any application on Snowflake’s compute cloud.
COMPANY NEWS: AMD CEO Lisa Su opened AMD's major announcement in San Francisco, which focused on new data centre processors and AI. You can watch everything that happened in the videos below; transcripts have been included. The AI announcements can be seen here.
Good morning! How's everyone doing this morning? It is so exciting to be back here in San Francisco with so many of our press and analysts and partners and friends in the audience, and welcome to all of you who are joining us from across the world. We have a lot of new products and exciting news to share with you today, so let's go ahead and get started.
At AMD, we're focused on pushing the envelope in high performance and adaptive computing to create solutions to the world's most important challenges. From cloud and enterprise data centres to 5G networks to AI, automotive, healthcare, PCs and so much more, AMD technology is truly everywhere, and we're touching the lives of billions of people every day. Today, we're here to talk about our newest EPYC data centre processors, our upcoming Instinct accelerators and our growing AI software ecosystem.
Now, taking a look at modern data centres, what you really need is the highest-performance compute engines across the board. Today we lead the industry with our EPYC processors, which are the highest performing processors available. We also offer the industry's broadest portfolio, which allows us to really optimise for the different workloads in the data centre, whether you're talking about Instinct GPU accelerators built for HPC and AI, FPGAs and adaptive SoCs, or the SmartNICs and DPUs from our Xilinx and Pensando acquisitions.
What we'll show you today is how we bring all of that together and really expand our portfolio with our next generation data centre and AI offerings. Now, since launching EPYC in 2017, we have been laser focused on building the industry's best data centre CPUs. EPYC is now the industry standard in the cloud, given our leadership performance and TCO across a wide range of workloads. Every major cloud provider has deployed EPYC for their internal workloads as well as their customer-facing instances. Today, there are more than 640 EPYC instances available globally, with another 200 on track to launch by the end of the year. Looking at the enterprise, EPYC adoption is also growing, especially for the most demanding technical workloads. Whether you're talking about financial services, telecom, technology, manufacturing or automotive customers, and many, many more, they're choosing EPYC based on our performance, our energy efficiency and our better total cost of ownership.
And that momentum is just growing as we ramp our 4th Gen EPYC Genoa processors. Genoa features up to 96 high-performance 5-nanometre Zen 4 cores. It has the latest I/O, including PCIe Gen 5, 12-channel DDR5 memory and support for CXL. We launched Genoa last November with leadership performance and efficiency, and since then other products have come to market. But if you look today, Genoa is still by far the highest performance and the most efficient processor in the industry.
So let's take a look at some metrics for Genoa, starting first with the cloud, where integer performance is key. Using SPECint rate and comparing to the competition's top of stack, EPYC delivers 1.8 times more performance. Looking at the enterprise, across Java workloads, virtualisation and ERP workloads, 4th Gen EPYC is up to 1.9 times faster. And perhaps the most important piece: in modern data centres, energy consumption has become just as important as overall performance. When we designed Genoa, we designed it with that in mind; yes, we want leadership performance, but we must have best-in-class energy efficiency.
And that's what 4th Gen EPYC does: we deliver up to 1.8 times more performance per watt than the competition, using the industry-standard SPECpower benchmark. What that means is that Genoa is by far the best choice for anybody who cares about sustainability. When we talk to customers, many of them tell us they need to refresh their data centres, and they really need to consolidate, get a better footprint and get a better operating cost.
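The consolidation maths can be sketched directly; the 1.8x ratio echoes the performance claim above, while the fleet size and rounding are a hypothetical illustration:

```python
import math

def servers_needed(old_servers: int, perf_ratio: float) -> int:
    """Servers required to replace a fleet, given the per-server
    performance of the new part relative to the old one."""
    return math.ceil(old_servers / perf_ratio)

# Hypothetical fleet of 100 older servers, replaced by servers
# each delivering 1.8x the throughput:
print(servers_needed(100, 1.8))  # -> 56
```

Fewer servers means a smaller footprint and lower power and operating cost, which is the consolidation argument being made here.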
This is actually the perfect place for Genoa, and it really shines for these types of consolidations. Now looking at AI, we're going to talk about our GPUs shortly, but today the vast majority of AI workloads are actually being run on CPUs, and Genoa is also the best CPU for AI. The best way to look at AI performance is to look at a broad set of end-to-end workloads, so we use the industry-standard TPCx-AI benchmark, which looks at end-to-end AI performance across 10 different use cases and a host of different algorithms. What we see in this case is that EPYC delivers 1.9 times more performance than the competition. Now, you can see I'm tremendously excited about Genoa and all of the applications our customers are running on it, but it's best to hear directly from our partners.
So I'm really excited to introduce our first guest, one of our most important cloud partners, to talk about how they're deploying Genoa in the public cloud. Please welcome our good friend, AWS Vice President Dave Brown. Thank you so much for being here with us...
Lisa Su
Yeah, so much has been going on. You know, we've been on this long journey and partnership together. Can you talk a little bit about our partnership?
David Brown
Lisa, thank you for having me and for the opportunity to be here today. I'm excited to talk about our partnership and how AWS and AMD are continuing to collaborate to advance technology for our customers. At AWS, we're constantly innovating on behalf of our customers, and we've built the most reliable and secure global cloud infrastructure, with the broadest and deepest portfolio of instances to support virtually every type of customer workload. AMD and AWS have been on a journey together since 2018, when AWS was the first to introduce AMD EPYC based instances in the cloud, delivering 10% savings over comparable x86 EC2 instances. As customers used and benefited from these instances, they requested additional AMD-based instance types to run a broader set of applications, and together we have introduced over 100 AMD EPYC based Amazon EC2 instances for general purpose, compute-intensive and memory-intensive workloads. And just last year, we introduced our first instance optimised specifically for high performance computing, Hpc6a, based on AMD processors, which delivered up to 65% better price performance over comparable EC2 x86-based compute-optimised instances for workloads such as computational fluid dynamics.
Lisa Su
Hey, we love the work with your team. It has been such a journey over all these years, and we love the 100-plus instances now; the breadth of your offerings is amazing. We always talk about what we can do for our joint customers and how they benefit from our technology. So can you tell us a little bit about customers?
Dave Brown
Absolutely. We have a broad range of customers benefiting significantly from the cost savings of AMD-based EC2 instances. Examples of enterprise customers who have invested these cost savings into innovation to improve their businesses include TrueCar, a digital automotive marketplace and one of my favourite tools. They sought ways to operate more efficiently and increase development velocity so they could invest the money saved into innovating the car-buying experience, and TrueCar reduced its AWS infrastructure costs by up to 25% through a combination of choosing the AMD instance family for its core infrastructure and right-sizing instances with AWS recommendation tools.
Another customer is Sprinklr, a purpose-built web platform for businesses to manage customer experiences on modern channels. With the scale at which Sprinklr operates, it needs to optimise its robust architecture for cost and performance. Sprinklr was an early adopter of our first generation AMD-based EC2 instances for general purpose workloads, our M5a instances, and when they moved to the next generation, Amazon EC2 M6a, Sprinklr saw 22% faster performance and 24% cost savings over the previous generation. And then in the HPC space there's DTN: they run weather and data models that deliver sophisticated high-resolution outputs, requiring continuous processing of vast amounts of data from inputs across the globe. DTN uses Amazon EC2 Hpc6a instances, powered by AMD EPYC processors, to run compute-intensive HPC workloads, and through the agility, elasticity and efficiency of running HPC workloads on AWS, DTN has effectively doubled its high-resolution global weather modelling capacity from two times a day to four times a day.
Lisa Su
I love what we're doing together with customers; it's really great to hear those examples. Both AMD and Amazon really share this passion for enabling our customers to do more with our technology. Now I'm even more excited, Dave, about what we're doing next together. So can you talk a little bit about what's next for our partnership?
Dave Brown
Absolutely. One of the things we continue to see is increasing demand from customers to run workloads faster while getting better price performance in AWS. To help address those customer needs, we're building new EC2 instances enabled by the unique combination of 4th generation AMD EPYC processors and the AWS Nitro System. The AWS Nitro System is the foundation for every single EC2 instance, enabling us to deliver performant, secure and efficient infrastructure. And by combining 4th generation AMD EPYC processors with the AWS Nitro System, we've unleashed the full capability of the next-gen AMD processors and can deliver significantly better performance for our customers.
Lisa Su
Yeah, we're so excited about what we're doing together with you, with Genoa, with Nitro, with all of your resources at AWS. Let's talk about what that means for customers.
Dave Brown
Well, today we're very excited to announce the preview of Amazon EC2 M7a general purpose instances, powered by the 4th generation AMD EPYC processor. M7a has been designed to provide the best x86 performance and price performance per vCPU within the Amazon EC2 x86 general purpose instance family. M7a instances offer a major leap in performance, with 50% more compute performance than M6a, the previous generation.
We think workloads including financial applications, application servers, video transcoding and simulation modelling would all benefit from M7a. M7a instances also offer new processor capabilities such as AVX-512, VNNI and BF16, enabling customers to get additional performance and bring an even broader range of workloads to AWS. As I mentioned earlier, M7a instances are in preview today; customers can sign up for the preview, with general availability coming in Q3. And of course, AWS will be bringing Genoa to more EC2 instances, so our customers can do more with this new level of performance over time. Lisa, we're also very excited that AMD will be using these instances.
Lisa Su
Yeah, absolutely, Dave. By the way, did you guys hear that? He said 50% more compute performance. It's just amazing gen-on-gen performance, and we're so excited that we're going to see Genoa throughout EC2. We truly appreciate the partnership with AWS; they're such an important partner for our own IT environment as well. We're using AWS today for our data analytics workloads, and we appreciate all of your flexibility and capabilities there, but with the new Genoa-based instances, we're going to expand our partnership to include some of our highest performing technical workloads, such as EDA.
Dave Brown
Well, Lisa, it's really great to be here with you to kick off this event. We're excited about the performance and price performance benefits we're able to deliver for our customers, and I can't wait to see how our joint customers will innovate with this technology.
Lisa Su
We're super excited about the new M7a general purpose instances reaching public preview, and we really believe this is a step-function improvement in the performance you can get in the public cloud. We're looking forward to delivering even more capabilities for our customers with these instances. When you look across the industry, we're really pleased with the response we're getting on Genoa, based on its leadership across a broad number of general purpose server workloads. I also want to say that Oracle is announcing today new Genoa standard, HPC and dense I/O instances that are expected to become generally available starting in July.
Now, overall, Genoa is ramping very nicely, and you'll see a number of other public instances and customers coming over the coming weeks and months. But as I said earlier, data centre workloads are becoming increasingly specialised, requiring optimised computing solutions: CPUs, GPUs and of course AI accelerators. And that's what makes AMD special. The breadth of our data centre and AI compute portfolio provides a significant edge, because you can use the right compute for the right workload.
So now let's talk about cloud native. Cloud native workloads are a very fast growing set of applications; they're, let's call it, born in the cloud. They're designed to take full advantage of new cloud computing frameworks, and they run as microservices: you split up large amounts of code into smaller processes that can then be scaled independently to enable 24/7 uptime. The optimum design point for these processors is actually different from general purpose computing: they're very throughput-oriented, and they benefit from the highest density and the best energy efficiency.
All of these factors drove the development of Bergamo. Bergamo is our first EPYC processor designed specifically for cloud native workloads. So let me tell you a little bit about it. Bergamo leverages all of the platform infrastructure we already developed for Genoa, and it supports the same next-gen memory and the same I/O, but this design point allows us to expand to 128 cores per socket for leadership performance and energy efficiency in the cloud. Now I'm very happy to show you. Drew, can I have my chip, please?
As you guys know, I love my chips. I'm very happy to show you Bergamo. This is our new cloud native processor, and what we have here is actually a new compute die. The compute die is different from Genoa's: using our chiplet technology, each of these eight compute dies has sixteen of our Zen 4c cores, and we use the same 6-nanometre I/O die used by Genoa in the centre.
So let me talk a little bit about how we do this. If you take a look at the core, Zen 4c is an enhanced version of the Zen 4 core, and it's a great example of our modular design approach. When we originally designed the Zen 4 core, we optimised it for the highest performance. Zen 4c is instead optimised for the sweet spot of performance and power, and that is what gives us the much better density and energy efficiency. The way we accomplish this is that we start from the exact same RTL design as Zen 4, which gives us 100% software compatibility.
We then optimise the physical implementation of Zen 4c for power and area, and we also redesign the L3 cache hierarchy for greater throughput. If you put all of this together, the result is a design with 35% smaller area and substantially better performance per watt. Now, from a product standpoint, what does this mean? The only real difference between Genoa and Bergamo is the CPU chiplet: we use the same socket, we swap out the Genoa CPU chiplets and we put in the Bergamo CPU chiplets.
Each of the eight compute chiplets on Bergamo contains twice the number of cores as on Genoa, and that's how we get to 128 cores per socket. But importantly, as I said, it's fully software compatible, and it's also fully platform compatible with Genoa. What that means for customers is that they can easily deploy either Bergamo or Genoa, depending on their overall compute needs and workloads, and really leverage the overall platform investment in AMD. So let me show you some of the performance metrics on Bergamo. If you compare Bergamo to the competition's top of stack, what you'll see is just incredible performance: we're delivering up to 2.6 times more performance across a wide range of cloud native applications, whether you're talking about web front ends, in-memory analytics or very heavy transactional workloads. And if you look beyond that, at overall density, Bergamo is significantly better than the competition in compute density and energy efficiency: we see more than double the number of containers per server and two times the energy efficiency in Java workloads. As you can tell, we are incredibly excited about Bergamo as well and the benefits it will bring to our cloud customers, and I'm happy to say that Bergamo is shipping in volume now to our hyperscale customers. As I said earlier, I always like to talk about how customers are using our solutions, so to hear more about how one of the world's largest cloud companies plans to deploy Bergamo, please welcome Meta's Vice President of Infrastructure, Alexis Björlin, to the stage.
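As a sanity check on the core counts described above, the chiplet arithmetic works out as follows (Genoa's 12-chiplet layout is an assumption for illustration, as the talk only states its 96-core total):

```python
# Bergamo: 8 compute chiplets, each with 16 Zen 4c cores (as described above).
bergamo_cores = 8 * 16
# Genoa: assumed 12 compute chiplets of 8 Zen 4 cores each.
genoa_cores = 12 * 8
print(bergamo_cores, genoa_cores)  # -> 128 96
```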
Lisa Su
The partnership that we've had with Meta is just incredible. You guys are known for really pushing the envelope in hyperscale computing, combining not just engineering leadership but also a commitment to open standards, and reliably running your infrastructure at scale for lots of applications and billions of people. So can you share a little bit about how we're working together?
Alexis Björlin
Absolutely. As you know, Meta and AMD have been collaborating on EPYC server design since 2019. These collaborations have expanded over time with Milan and Genoa, and now Bergamo. We work closely to customise AMD's EPYC architecture to meet Meta's power efficiency and compute density requirements. These optimisations span all layers of the hardware and software stack, including the Zen cores, SoC composition, firmware, kernel, performance telemetry and software, to deliver best-in-class performance per TCO for our compute infra.
We've also shared our learnings around reliability and maintenance and have helped improve EPYC server designs for all hyperscale deployments. And as you know, with all of our platforms, we open source: we open-sourced the AMD Milan-based server design via the Open Compute Project in the past, and we intend to do the same with our latest Bergamo-generation high-volume servers. So we really appreciate working together with your team, and your support.
Lisa Su
Oh, absolutely, Alexis. You have a demanding team, let me say, but we love the work that we do with them. Deploying infrastructure at your scale does present some unique challenges, and we learned a lot along the way. Can you talk a little bit about some of the work we've done together to address those requirements?
Alexis Björlin
Absolutely. As you know, we've deployed hundreds of thousands of AMD servers in production across our global data centres and fleet, running thousands of workloads in service of WhatsApp, Instagram, Facebook and our other product groups. We've also deployed AMD servers for video transcoding and storage systems, and we're using AMD CPUs in our AI compute platforms. We've shared our workload learnings with AMD, and we work together to address issues and scaling challenges as they arise.
And you're right, our scale is massive, and our scale and generational growth rate naturally strain our suppliers. Early on in our partnership, we had concerns about AMD's ability to scale alongside us and meet our demand as we aggressively built out our data centres. But over the years, AMD has consistently delivered on those commitments, whether with your supply or your technical product roadmap innovation. We've been thoroughly impressed, and we've learned that we can rely on AMD to deliver time and time again.
Lisa Su
I really want to say, on behalf of all of our engineers, thank you for that. We worked really hard. We truly value our partnership, and as we've said before, we love learning, innovating and co-developing with our partners, and some of those insights actually helped us shape what Bergamo should be. So, as one of the leading cloud companies, talk about Bergamo and how it fits into your plans.
Alexis Björlin
Absolutely. We are incredibly excited to be preparing to deploy Bergamo as our next generation high-volume general compute platform for Meta. We're seeing significant performance improvements with Bergamo over Milan, on the order of two and a half times.
Lisa Su
I'm sorry, did you say two and a half times?
Alexis Björlin
We're seeing TCO improvements over Milan as well, so you make it pretty easy on me, Lisa. We love products that make both our technologists and our business teams happy. Building upon the core silicon innovation that AMD has enabled with Bergamo, we've partnered on several other optimisations that help our workloads, including dense compute chiplets, re-architected cache ratios, power management, and manufacturing optimisations that help us pack a high number of these servers into a rack and deliver rack-level performance-per-TCO improvements as well. Thanks to the flexibility of your chiplet strategy with Bergamo, we're also pleased to have an I/O-intensive server option that we can leverage for HDD and flash storage platforms. So with our focus on enabling new products for our customers as well as capital efficiency, we're thrilled to unlock the benefits of Bergamo for our entire family of apps.
Lisa Su
Well, again, thank you, Alexis. We are so excited to be working closely with you on Bergamo, and we're really looking forward not just to deploying Bergamo, but to all that we'll do together in the coming years. So thank you again for joining me today, and thanks for your partnership.
Alexis Björlin
Thanks so much, Lisa.
Lisa Su
You know, we're so excited to hear stories like this, where Meta is deploying Bergamo broadly across the spectrum, including services like Facebook, Instagram and WhatsApp that we use every day, as well as a number of other services. So as you can tell, we're incredibly proud of our 4th Gen EPYC family, but there's a bit more on the CPU side. To tell you more about how we're expanding our portfolio to deliver leadership in technical computing workloads, let me invite Senior Vice President and General Manager of AMD's server business, Dan McNamara, to the stage.
Dan McNamara
Thank you, Lisa, and good morning, everyone. One year ago in June, we rolled out our 4th Gen EPYC portfolio strategy for different workloads, and we are super excited today to bring you two new products. You just saw how we optimised 4th Gen EPYC for cloud native computing with Bergamo; I'm going to spend some time showing you how we also optimised EPYC for a different set of data centre workloads: technical computing.
For enterprises and firms that design and build physical products, engineering simulation is business critical. These companies need the top engineers in the industry, supported by the best computing infrastructure. Companies that can move faster and more efficiently differentiate themselves by getting to market faster with more innovative, higher quality products, delivered under a reduced OpEx budget. So with these goals in mind, we developed our second generation of AMD 3D V-Cache, using the same integration of cache on core chiplets.
But we're now supporting more than 1 GB of L3 cache on a 96-core CPU. A larger cache feeds the CPU faster with complex data sets and adds a new dimension to processor and workload optimisation. We first introduced this technology last year with Milan-X, and now we're bringing it to 4th Gen EPYC, pairing it with the high-performing Zen 4 core that you just heard about. So today I'm super excited to announce the availability of 4th Gen EPYC processors with AMD 3D V-Cache, codenamed Genoa-X. Genoa-X scales from 16 cores to 96 cores, and the parts are socket-compatible with Genoa.
Genoa-X helps unlock the potential of the world's most important and demanding technical computing workloads. Now let's spend a minute on these workloads. From aircraft engines to the most advanced semiconductors, the rapid design and simulation of new products is imperative in today's market. So while Genoa is the fastest general purpose server processor in the market, Genoa-X takes this performance to a new level for technical computing. And we're delivering all of this performance in conjunction with our partners, including digital manufacturing software from Altair, Ansys and Dassault, and EDA software from companies like Cadence, Siemens and Synopsys.
We continue to work closely with these solution providers to create an optimised environment for our mutual customers. Now let's take a look at some of the performance you'll see with these solutions, starting with some widely deployed CFD and FEA workloads. In blue you see our high core count Genoa-X processor; in grey you see the competition's top-of-stack processor. What this data shows is that across these applications, the 96-core Genoa-X delivers more than double the performance. And even if you compare parts with the same number of cores,
the performance advantage remains very, very clear. All of this performance and software will be qualified in servers from the industry's top OEMs, and platforms featuring Genoa-X will be available next quarter. We sincerely appreciate our software and OEM partnerships as we increase the number of solutions to further serve the technical computing market with industry-leading performance and efficiency. Companies can also leverage the public cloud to run these simulations at top performance, so for more on that, I'd like to welcome Nidhi Chappell from Microsoft to the stage.
Dan McNamara
Thank you for joining us. We have a strong partnership on technical compute across Azure, and I'd love for you to share with the audience a bit about this partnership and our achievements.
Nidhi Chappell
Yes, we've been on a mission together for some time now. Microsoft and AMD have a strong collaboration, and we have a joint goal: we wanted to make sure we could deliver unprecedented performance for high performance computing. As our enterprise customers looked to accelerate their digital transformation, they wanted to make sure that critical workloads like HPC could come along and really benefit from the scale, reliability and efficiency of the cloud. Towards that goal, we started our partnership back in 2019 with the introduction of our first HB series, which featured 1st Gen EPYC processors.
That was the first time we ran a 10,000-core simulation, and we thought, wow, we can run 10,000 cores. Then we upped our own game: in 2020 we launched our second generation, HBv2, with 2nd Gen EPYC. We got into the top 10 supercomputers and started to really gain momentum in the market. We kept building on that momentum, and in 2021 the third generation HBv3 series went live to customers across the planet the day Milan was launched. And last year we enhanced this even further: we announced that we would upgrade the third generation series with AMD 3D V-Cache, which provides 80% more performance for our customers at no additional cost. In just four years, we have delivered 4X the performance for all of our HPC customers.
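That 4X-in-four-years figure implies an average generational gain that can be computed directly (a back-of-envelope illustration, not Microsoft's published methodology):

```python
# Average per-generation speed-up implied by 4x over four yearly steps
# is the fourth root of 4, i.e. sqrt(2).
per_gen = 4 ** (1 / 4)
print(round(per_gen, 2))  # -> 1.41, roughly 41% faster each generation
```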
Dan McNamara
So I must admit, with HBv3 and Milan-X we did have a lot of fun as two teams, and we also brought a lot of ISV partners and customers to the table; it was really an exciting product. But I think we have some more exciting news to share today, so why don't you tell us about the future of Azure HPC computing.
Nidhi Chappell
Absolutely. Today we are announcing the general availability of the 4th generation of our HB series, HBv4. Along with that, we have new memory-optimised HPC virtual machines, which we're introducing as the Azure HX series. Both our HBv4 and HX series feature the AMD 3D V-Cache we were just talking about, along with our InfiniBand networking offering, which allows HPC workloads to scale very well. Now, if you look at our 4th generation HBv4 series, it offers 1.2 terabytes per second of memory bandwidth. Pair that with the 2X improvement we have seen in compute density, and suddenly we can deliver 4.5X faster HPC workloads. This can be workloads like
Weekly Dynamics, financial modelling, weather simulation. Virtualized rendering of Falstaff. And this is. The beauty of actually combining the efficiency we get from A&D and the the scale of cloud. That's on our HP V4 on the Ajax series, we are actually taking the offering, the ultra low latency memory latencies along with the 3D V cache and a massive 1.4 terabytes of system cash. Now for some of the. Workloads that are. Any data intensive like silicon design structure analysis, this will deliver 6X performance which is phenomenal. So for a lot of these customers, what this means is they can now fit a lot of their existing workflows either on the same number of cores, fewer number of cores. And overall have a much better total cost of ownership because we save a lot on the software licences.
So really see 66 performance, a lot more savings and you pay that up with best in class Azure managed file system. Azure offerings on orchestration of our workloads and end to end our customers will see significant performance in the cloud just fantastic.
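The headline numbers above can be sanity-checked with a simple Amdahl-style blend of the quoted gains. The sketch below is purely illustrative and is not Azure's methodology: the 2X compute-density figure is from the talk, while the memory-bandwidth gain and the memory-bound fraction are assumed parameters chosen to show how a blended speedup in the quoted ~4.5X range can arise.

```python
# Illustrative arithmetic only. The 2x compute gain is quoted in the talk;
# the 8x memory gain and 0.75 memory-bound fraction are assumptions.

def effective_speedup(compute_gain, memory_gain, memory_bound_fraction):
    """Amdahl-style blend: each fraction of runtime scales by its own gain."""
    return 1.0 / (
        memory_bound_fraction / memory_gain
        + (1.0 - memory_bound_fraction) / compute_gain
    )

s = effective_speedup(compute_gain=2.0, memory_gain=8.0, memory_bound_fraction=0.75)
print(f"blended speedup: {s:.2f}x")
```

With these assumed parameters the blend lands near 4.6X; a workload that is less memory-bound would see proportionally less benefit from the bandwidth jump.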
Dan McNamara
So one of the things you and I talk about a lot is how we enable our customers to solve their biggest problems. And I thought it might be a good opportunity to talk about some customers using HBv4.
Nidhi Chappell
Absolutely. The true test ultimately is customer adoption, so I have two customers that I wanted to talk about today. One is Petronas. As everybody knows, Petronas is a global energy company; they operate in countries worldwide, and they are actually the first to use the new fourth-generation HB series. With Petronas, they are trying to see how they can take their upstream work, where they do highly quantitative interpretation and seismic processing, workloads that really need massive memory bandwidth and the capability of high performance computing. And this is where we worked very closely with AMD to make sure that we could bring these new VMs.
We could combine that with a lot of our AI tools, really accelerate the work their geophysicists are doing, and help them make decisions faster. Along with performance, though, Petronas also has a commitment to its corporate sustainability objectives, and with Azure, because we are going to be 100% renewable energy by 2025, we not only allowed Petronas to get to their performance objectives, but we are also helping them get to net-zero carbon emissions by 2050. All in all, what this means is that as customers look to the performance and scalability of cloud, they can really benefit from the offerings that we have.
That was on HBv4. If I look at the newly announced HX series, I want to talk about STMicroelectronics. STMicro is a leading semiconductor company, and they are actually the first ones to use our Azure HX virtual machine series. Again, this is a brand-new offering from us, and STMicro is going to use it for designing their next generation of chips, so a lot of their RTL simulations. Now RTL simulation, especially as process technology gets deeper and deeper, requires much, much lower memory latency and a large memory footprint, which is perfect for the HX series.
So what HX allowed STMicro to do is pack a lot more of their simulation jobs into each VM, which in turn meant they needed fewer VMs and could do it far more efficiently. In some experiments that they have done, STMicro has been able to bring simulation time down by 30%. What that means is their silicon engineers can look at a lot more design possibilities, and they can improve product quality because they are now doing a lot more validation. Ultimately, they can bring products to market faster, and they don't have to worry about anything, because they can do all of this in the cloud.
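The packing effect described here is simple ceiling-division arithmetic: fitting more jobs per VM shrinks the VM count, independent of the runtime gain. In this sketch only the 30% simulation-time reduction comes from the talk; the job counts and jobs-per-VM figures are hypothetical.

```python
# Hypothetical numbers to illustrate the VM-packing effect.
# Only the 30% time reduction is a figure quoted in the talk.

def vms_needed(jobs, jobs_per_vm):
    """Ceiling division: the number of VMs required to host all jobs."""
    return -(-jobs // jobs_per_vm)

baseline_vms = vms_needed(jobs=1200, jobs_per_vm=8)   # smaller-memory VM
hx_vms = vms_needed(jobs=1200, jobs_per_vm=24)        # larger memory footprint

print(f"baseline: {baseline_vms} VMs, HX: {hx_vms} VMs")

# On top of packing, the quoted 30% cut in simulation time:
old_hours = 10.0
new_hours = old_hours * (1 - 0.30)
print(f"a {old_hours:.0f}-hour run drops to {new_hours:.1f} hours")
```

Fewer VMs and shorter runs compound: the same validation budget buys more design iterations, which matches the "more design possibilities" point above.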
Dan McNamara
Yes, it's just fantastic. So it's great to see not only HBv4 come to life, but to see customers adopting it across industries. Really exciting, and we can't wait to help grow this going forward. I just want to say thank you for the partnership, and thank you for coming to see us today.
OK. Another just great example of how we're partnering with our customers to better serve their own customers. The VMs based on Genoa-X are now available with Microsoft's HBv4 and HX, powering companies that bring the most innovative products to market. Genoa-X is just one example of how we're optimising for different workloads. You also heard more about Genoa and Bergamo today. The final piece of our Zen 4 portfolio is Siena, which is optimised for telco and edge workloads. Siena will deliver maximum performance and energy efficiency in a cost-optimised package, and we will tell you all about it later this year when we bring it to market in the second half of the year.
So now, let me turn it over to Forrest Norrod, GM of the Data Center Solutions Business Group, to talk about how the modern data centre is evolving and what that means for data centre infrastructure. Thank you.
Forrest Norrod
Thanks, Dan. AMD is delivering the industry's best set of workload-optimised CPUs. Beyond the CPU, the workload-optimised data centre needs to be one where every high-value workload is supported by high-performance engines, is energy efficient, and is agile, meaning it is easy to deploy, manage and secure. That's our vision of the data centre. And I'd like to bring on Jeff Maurone of Citadel Securities to talk about their workload-optimised data centre built with AMD. Jeff, welcome. Thanks for coming, and maybe you could tell us a little bit first about Citadel and Citadel Securities.
Jeff Maurone
Citadel is really two firms. First, there's Citadel, the world's most profitable hedge fund, managing about $60 billion. Then there's Citadel Securities, the world's largest market-making firm; this is where I work and what I'm here to talk about. What does that mean? As a market-making firm, we provide buyers and sellers, financial investors, opportunities to buy or sell any asset at a competitive price, and we do this at massive scale. On any given day in the US equities market, 25% of shares that change hands pass through one of our systems. We do that for equities, options, ETFs, treasuries and a variety of other assets, at exchanges around the world. So now if you think about us from a technology perspective,
it's best to think of us as real-time predictive analytics. We develop complex pricing models that predict where the market is going, sometimes at the millisecond or microsecond scale, and then as quickly as possible we deliver those prices to the market. Now there are really two technology platforms that underpin this business, two platforms with very, very different workloads. First, there's a computational research platform that we use to develop these strategies and test them. And then there's an ultra-low-latency platform that we use to respond very quickly to buyers and sellers. Underlying both of these platforms, there's a complex monitoring layer to make sure that those models are always performing safely and effectively.
Forrest Norrod
Great, thanks for the overview. Now let's dive into each one of those. Tell us more about the research platform.
Jeff Maurone
Yes, so let's talk about research. Research for us means developing hypotheses about where market prices are going, expressing those hypotheses in code, and then testing them. But here's the catch: testing for us means releasing those strategies in a simulation of the entire market and seeing how they perform under a variety of different environments and scenarios. And so the compute platform that we need to do this demands enormous scalability and a real focus on workload optimisation. As an example, all of this research, which runs in the public cloud, reaches a peak demand of about a million concurrent cores and relies on a market data archive of nearly 100 petabytes. In late 2020, we transitioned all of that workload to AMD and saw a 35% improvement in research performance. And the takeaway here is that EPYC's innovation, particularly in memory bandwidth, really unlocked a different level of performance for our business.
Forrest Norrod
That's fantastic. That's an enormous problem with very impressive scale. There are not many workloads that require a million cores, and we're proud that you've trusted us and that EPYC provides you that performance. Now tell us about the trading platform, because I think that's a little bit different story.
Jeff Maurone
Very different, very different. Here is where we are vastly different, I would say the polar opposite of many of the hyperscalers we heard from just a few minutes ago. Densification and virtualization are simply not welcome in this platform, and in fact we have invested massive resources, both internally and with AMD as our partner, to take microseconds, nanoseconds and soon picoseconds off our latency. And every one of the cores in this platform runs in some of the most expensive data centre real estate in the world, expensive because it is as physically close as possible to the centres of financial markets. So here is where our AMD partnership is also critical: if there is a packet of market data passing through that platform, it is guaranteed to go through a Solarflare NIC. And for the most latency-sensitive strategies that we run, Xilinx FPGAs are absolutely essential. Quite frankly, they bring to market strategies and models that otherwise would never see the light of day.
Forrest Norrod
You know, Jeff, Citadel Securities is obviously a great example of the theme we've been talking about, which is workload optimisation and the need for workload-optimised solutions, and you certainly have two very different problems.
Jeff Maurone
Absolutely. In research, we look very similar to a hyperscaler, but in low-latency trading we look like the polar opposite. AMD has done an excellent job of understanding complex, multifaceted customers like Citadel Securities, and I look forward to continuing to innovate on products that make a difference for our business and improve how financial markets function.
Forrest Norrod
Well, thanks very much, Jeff. Thanks for being here today, but more importantly, thanks for the trust and partnership over the years.
Jeff told you how a million cores of EPYC CPUs in the cloud deliver optimised performance for the research simulations and trading strategies, and how Alveo FPGAs deliver the ultimate in performance for the trading and market-making systems. But he also introduced you to another important aspect of AMD in the data centre, which is our network portfolio. For Citadel, this means high-performance Solarflare NICs, giving them the ultimate in low latency and high performance to power millions of stock trades per day in a highly competitive marketplace. Together, computing and networking engines are becoming increasingly important to optimise, and to optimise together, to deliver the performance, efficiency and agility needed in the optimised data centre. And so at AMD, we recognised that networking represented an increasingly critical element of the data centre infrastructure that needed to be optimised for data centre workloads, and that is what motivated our acquisition of Pensando.
The complexity of the data centre continues to evolve, and the latest step, with hybrid cloud and the extension of cloud computing out to the edge, is an incredibly important model. But the challenges that come with that model are significant. The first challenge is inherent to the cloud: virtualization gave us higher utilisation and agility, but also introduced overhead to ensure that workloads were separated and secured.
Further, network complexity has exploded as compute and storage systems spread across the data centre and applications no longer run on a single system. Managing these distributed resources is complicated, and securing the data centre is even harder, with an expanding attack surface and much more to monitor, particularly without further taxing the systems. So today we have a very complicated cloud environment. The agility of the cloud is paid for with overhead in each server, and typically separate appliances to provide infrastructure services. The CPU tax in many cloud deployments can be 30% or even more.
That is for the hypervisor and infrastructure software alone. Load balancing, security and other appliances are a major cost and a pain point to manage. Some of the pioneers in cloud computing, and you heard from one earlier today, recognised the complexity and overhead being introduced by this architecture and created the concept of the DPU. AMD and our Pensando team evolved the concept of the DPU beyond a sea of cores into the world's most intelligent data flow processor. What I believe is the best networking team in the industry, the team we acquired with Pensando, stepped back, took a fundamental look at the problem, and created a purpose-built architecture able to provide complex services, including security, software-defined networking and telemetry, at line rate, while simultaneously being programmable to accommodate customers' differing needs and future growth.
Now if we put this P4 DPU into each server, we can free the CPU from that overhead. With the DPU in each server, we offload the virtualization overhead and bring the infrastructure and security services to the point of use, eliminating or dramatically reducing the need for a set of external appliances, and further improving agility by making network and resource management uniform across all of the servers. And perhaps most importantly, we dramatically improve the TCO of the cloud by freeing the CPU resources to do what is really needed: run the workloads. Offloading that cloud and virtualization overhead is exactly why we first deployed our AMD Pensando P4 DPU in a SmartNIC, making it easy to add the DPU to each server and allowing us to tackle the overhead and free the server for productive use.
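A back-of-envelope calculation shows why this offload matters at fleet scale. The ~30% CPU tax is the figure quoted above; the fleet size, cores per server and the assumption that a DPU recovers the full tax are hypothetical, illustrative inputs.

```python
# Back-of-envelope only: the ~30% "CPU tax" is quoted in the talk;
# the 1,000-server fleet, 128 cores/server and full-recovery assumption are mine.

def reclaimed_cores(servers, cores_per_server, cpu_tax):
    """Cores returned to workloads if a DPU absorbs the infrastructure tax."""
    return servers * cores_per_server * cpu_tax

cores = reclaimed_cores(servers=1000, cores_per_server=128, cpu_tax=0.30)
print(f"{cores:,.0f} cores reclaimed for workloads")
```

Under these assumptions a modest fleet gets back tens of thousands of cores, which is the TCO argument in a nutshell: capacity you already paid for goes back to running workloads instead of infrastructure services.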
But it also dramatically improves the security paradigm: providing services such as firewall protection on all east-west traffic across distributed applications, encrypting all of that traffic, and providing telemetry without taxing the CPU, which allows security systems to have early warning of threats and anomalies in the network. I'm proud to say that AMD Pensando SmartNICs are deployed now in major clouds, and enterprises can get this solution today alongside VMware vSphere through Project Monterey. The SmartNIC has been a fantastic innovation, but as we work with data centre innovators across the industry, we've recognised that the value proposition of the DPU extends well beyond the SmartNIC.
The same DPU silicon and software platform is now deployed in the switch itself, where many of the same performance, efficiency and agility benefits can be shared amongst a group of servers. And there is another bonus to this approach: it can just as easily be deployed into an existing infrastructure as designed into a new one. A great case in point is Microsoft's Azure Accelerated Connections service, which is in public preview today and is powered by our Pensando P4 DPUs, providing dramatically higher connections per second and network performance for customers embracing it.
And we're very excited about that. So now we're bringing that same concept to the enterprise with the smart switch developed with our partners at HPE Aruba. It's an innovative switch built on industry-standard switching silicon and AMD P4 DPUs. Traffic flows through the DPUs, which deliver offloaded infrastructure and security services that give the data centre not just better efficiency but enhanced security and observability. And those benefits can be realised well beyond the walls of the data centre, into the myriad edge computing applications that depend on security, performance and agility.
Deploying the same architecture and software platform at the edge means that the endpoint has a common level of security and manageability with its counterpart servers back in the data centre, and the connections between the data centre and the edge are as secure as possible. So putting it all together: you can deploy a few servers in a retail location, with SmartNICs providing consistent security and manageability everywhere, Siena EPYC CPUs providing transformative energy efficiency and performance, and Alveo inference accelerator cards powering latency-sensitive AI applications to enhance the retail experience and security.
The full AMD portfolio can provide similar benefits to telco deployments and smart city applications, amongst many others. And so I hope that in this first half of our presentation, we've given you insight into how AMD is helping our customers evolve their data centres and make them more efficient, both in the cloud and in the enterprise, with workload-optimised solutions that address the most pressing problems of the hybrid cloud.
Next, let me invite Lisa Su back to the stage to discuss how we're helping our customers address the next evolution in the data centre: incorporating AI into all they do. Thank you.
AMD's announcements regarding AI are in this separate article.