Vinay Sinha, corporate vice president – India Sales, AMD, told ET that more than 6,000 engineers in India are closely involved in every aspect of the data centre business and that the company has seen its business double in the past two years.
The company's technology spans artificial intelligence (AI), healthcare, aerospace, automotive, gaming and entertainment.
“Our India teams do a lot of work for server chip design as well as network equipment development. AMD’s operations in India are central to every major product and design offering and are expected to continue to be so,” said Sinha.
He said it is important to have chip design done in India, given the vast engineering talent pool in the country.
The Xilinx and Pensando acquisitions last year diversified AMD’s portfolio of central processing units (CPUs) and graphics processing units (GPUs) to include system-on-chips, field programmable gate arrays and smart network interface cards.
“We can deliver differentiated intellectual properties and designs to become the innovation hub for our company especially around new technologies like artificial intelligence and machine learning,” said Sinha. AMD recognises its engineering talent as the backbone of the company and is focused on retaining and hiring the right talent for continued growth, he said.
“The India engineering team has considerable ownership across silicon and software for our server product line, including fourth generation EPYC. In fact, Indian engineers have played a central role in every generation of the EPYC server processor series, and the teams in Bengaluru and Hyderabad were involved in building the processor from scratch,” said Sinha.
India has about a fifth of the world’s chip design engineers.
AMD’s local team is integral to its global research and development workforce, said Sinha. “India plays a big role in our development resources, both company-wide and for data centres,” he said.
From a design perspective, the company’s philosophy is not to look at India for cost arbitrage, according to Sinha. “We would like to do end-to-end design in India. We’ve invested for many years and so at some point to do end to end products in India is very much a possibility. And for that we require the manufacturing infrastructure as well,” he said.
As chairperson of the 13-member Semicon India Future Skills Talent Committee, Jaya Jagdish, AMD India country head, had submitted a report to the government on ways to strengthen semiconductor talent in India. This year, the company will partner with the government and the All India Council for Technical Education to act on the recommendations.
AMD is also focused on accelerating the deployment of AMD AI platforms at scale in the data centre, led by the launch of its Instinct MI300 accelerators planned for later this year, said Sinha.
The company is investing in innovations such as advanced packaging and 3D stacking, chiplet architectures, and the AMD Instinct MI300X GPUs and MI300A APUs. “We are also leveraging our AI IP across our portfolio of products. For example, we have integrated our AI engines into our Ryzen 7040 series of CPUs and are the only vendor with this capability,” said Sinha.
The company believes that AI requires multiple engines, and that GPUs are essential for the kind of generative AI workloads running at hyperscalers, he said. “We have the capability to offer AI solutions from the cloud to the edge to the endpoints,” he said.
The company’s Instinct MI250 GPU is already showing higher performance on large language models than Nvidia’s A100, thanks to higher memory capacity and bandwidth, he said.
“We also recently announced the AMD MI300X accelerator which stands out as model sizes continue to expand, yielding superior quality results. Offering 2.4 times more memory density and 1.6 times higher memory bandwidth than its competition, MI300X accommodates large language models like MetaAI’s OPT in a single GPU,” said Sinha.
This allows customers to maximise inference capabilities per GPU, per server and even per data centre, he said.
(The author was in San Francisco for the AMD Data Center and AI Technology Premiere at the invitation of AMD)
Source: economictimes.indiatimes.com