
Friday, November 21, 2008

AMD: Shanghai Processors Are In Full Production

Just a day before the fourth quarter of the year begins, Advanced Micro Devices reassured its customers and investors that it is on track to deliver its highly anticipated code-named Shanghai microprocessors for servers in Q4 2008, just as promised. According to the company, the new chips are already in production, and revenue shipments will begin shortly.

“We’re in full production right now in the factory. People will start getting first silicon from the final production very shortly,” said Pat Patla, general manager of AMD’s server and workstation chip business, in an interview with the CNET News website.

AMD Opteron “Shanghai” microprocessors are made using 45nm process technology and feature an enlarged 6MB level-three cache as well as an improved HyperTransport 3.0 bus. In addition, other enhancements allow the new chips to offer higher instructions-per-clock (IPC) throughput than currently available AMD Phenom and AMD Opteron processors, which should translate into higher overall performance per clock.

It is absolutely crucial for AMD to start shipments of the new “Shanghai” processors on time and to ensure that their performance and power consumption are competitive with Intel Xeon central processing units.

But while the company seems to be on track to initiate commercial shipments of its quad-core server chips made using 45nm process technology in Q4 2008, the chipmaker will only be able to release its desktop CPUs manufactured using the same fabrication process in early 2009.

AMD to Release Quad-Core Processor for Notebooks in 2010

Advanced Micro Devices will trail Intel Corp. by more than a year with quad-core microprocessors aimed at notebooks: the first quad-core chip for laptops from AMD will only emerge in 2010.

The first quad-core central processing unit (CPU) for notebooks in AMD’s lineup will be code-named “Champlain” and will form the basis of the code-named “Danube” platform, AMD revealed at last week’s meeting with financial analysts. There are no details about the chip yet, but since the processor will only emerge sometime in 2010, it stands a good chance of being made using 32nm process technology.

Before the “Danube” platform emerges in 2010, AMD plans to release its “Tigris” platform in 2009. The forthcoming platform will be based on the dual-core code-named “Caspian” CPU, manufactured using a 45nm fabrication process, as well as the next-generation RS880M+SB710 core-logic set.

It is interesting to note that both “Caspian” and “Champlain” microprocessors will be made in the same form factor: third-generation Socket S1.

According to Jon Peddie Research analysts, desktop-replacement notebooks for gamers are “showing strong gains”. It is therefore regrettable that AMD decided not to introduce quad-core mobile microprocessors in 2009. Intel already has a quad-core processor for high-performance notebooks, and its lineup is likely to expand next year.

While quad-core chips for mobile computers are unlikely to be hugely popular, the lack of an appropriate option in AMD’s arsenal effectively means that the world’s No. 2 x86 chipmaker will not be able to compete against Intel in the high-performance/desktop-replacement laptop market segment.

Saturday, December 22, 2007

AMD Radeon HD 3800: ATI Strikes Back

Introduction
Things aren't looking too good for AMD. Until now, only two of its graphics cards were really worthwhile: the Radeon HD 2900 XT, which performs better than the GeForce 8800 GTS 640 MB at a similar price (though with much higher noise and power consumption at peak), and perhaps the Radeon HD 2600 XT, though only for home-theater enthusiasts. There was a big gap between those two cards, even if their respective price points were almost coherent, and the manufacturer was ready to fill it with the Radeon HD 3850 and 3870, which launch only today. At least, that was the plan until, all of a sudden, its archrival knocked the wind out of it by launching a card that surprised everyone, NVIDIA included: the GeForce 8800 GT 512 MB, with a truly exceptional performance-price ratio.



Call of Duty 4
The situation thus becomes particularly ironic today, because AMD's very high end is beaten by a card sold at $230. It's a situation that reminds us of a time we thought was forgotten: that of the first Radeon. Still, the rushed launch of the GeForce 8800 GT has been marked by very problematic availability, and supplies are going to remain tight until January. So what can AMD offer in this price range for the end of the year?


Direct3D 10.1: Incompatible?
With its new range of GPUs, the Radeon HD 3000 series, AMD is the first to support the next version of Direct3D: Direct3D 10.1. But what does this new revision of Microsoft's API have in store for us?

Incompatible?
When the first pieces of information on Direct3D 10.1 leaked this summer, some websites echoed a troubling rumor: this new version would be incompatible with the previous one! Angry reactions immediately spread throughout the web. As a matter of fact, Microsoft was reaping what it had sown with the buzz generated around Direct3D 10. Gamers had had to accept that that version wouldn't be compatible with the previous ones and that it would be tied specifically to Redmond's latest OS, Vista. Microsoft had nevertheless promised that this was inevitable in order to guarantee a future-proof API. And yet, a couple of months later, here were rumors of a revision that dared to be, once more, incompatible. For many, enough was enough.



Instancing 10: a demo from the Direct3D 10 SDK
However, as is often the case on the web, it all came to nothing: Direct3D 10.1 is fully compatible with its predecessor. But let's dig deeper into what we mean by compatible versions of an API. Up until the ninth version, the various DirectX iterations followed one another while maintaining backward compatibility: when you installed a new DirectX version, you could still play all of your older games that used previous versions. Similarly, it was possible for a game to create a DirectX 9 interface but use it only as a DirectX 8 interface. Among other things, this allowed developers to maintain a single piece of code to support two kinds of cards, reserving advanced features for cards that truly handled DirectX 9. To do this, programmers had access to a structure that gave a detailed list of the card's real abilities. Conversely, this compatibility no longer exists in Direct3D 10. To ensure older games run on Vista, Microsoft integrated both APIs into its latest OS.
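As an illustration, here is a minimal, untested sketch of how that capability structure was queried in DirectX 9; the helper name and the Pixel Shader 2.0 threshold are our own illustrative choices:

#include <d3d9.h>

// Decide at startup which render path a single binary should take.
bool SupportsPixelShader20(IDirect3D9* d3d)
{
    D3DCAPS9 caps;  // the detailed list of the card's real abilities
    if (FAILED(d3d->GetDeviceCaps(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, &caps)))
        return false;
    // Cards below this level fall back to the DirectX 8-style code path.
    return caps.PixelShaderVersion >= D3DPS_VERSION(2, 0);
}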



Windows Vista APIs
In a similar fashion, a Direct3D 10 interface doesn't grant access to the ninth version's APIs, many of which were deleted. If developers wish a game to support both Direct3D 9 and 10, they are compelled to plan two distinct versions of it, which isn't really different from what they would have had to do to support both OpenGL and Direct3D. This is the case we mean when we speak of incompatible APIs.

Conversely, it's quite possible to create a Direct3D 10.1 interface on a card that only supports Direct3D 10, the new API being a strict superset of the old one: everything found in Direct3D 10 is also found in its big brother. The developer's only duty is to ensure that features present only in Direct3D 10.1 are never called on a Direct3D 10 card, which was already a necessity with previous versions of the API.

Obviously, the already available Direct3D 10 GPUs (G8x, G9x and R6x0) don't support the latest API's additions. That seems like a no-brainer, and yet this point has generated a lot of confusion. Regarding older GPU support, Microsoft had promised the death of caps bits with Direct3D 10 and has kept its word... well, sort of: caps bits no longer exist, but they have been replaced by what Microsoft calls Feature Levels. The main difference is that it's no longer necessary to check that each feature is individually supported; one needs only check whether the feature level is Direct3D 10 or Direct3D 10.1, which is enough to determine precisely what the GPU supports.
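A minimal sketch (untested, assuming the d3d10_1.h headers) of what the Feature Level mechanism looks like in code: one asks for the 10.1 level and falls back to 10.0, with no per-feature caps checks afterwards:

#include <d3d10_1.h>

// Create the most capable device available; the feature level alone tells
// the application exactly what the GPU supports.
ID3D10Device1* CreateDevice()
{
    const D3D10_FEATURE_LEVEL1 levels[] = {
        D3D10_FEATURE_LEVEL_10_1,  // RV670 and later
        D3D10_FEATURE_LEVEL_10_0,  // G8x, G9x, R6x0
    };
    for (int i = 0; i < 2; ++i) {
        ID3D10Device1* device = NULL;
        if (SUCCEEDED(D3D10CreateDevice1(NULL, D3D10_DRIVER_TYPE_HARDWARE,
                                         NULL, 0, levels[i],
                                         D3D10_1_SDK_VERSION, &device)))
            return device;  // GetFeatureLevel() reports the level obtained
    }
    return NULL;  // no Direct3D 10-class hardware
}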

Direct3D 10.1: What's New
Let's be clear right from the start: the additions brought by this new API aren't revolutionary. Direct3D 10 was a big makeover, and as always with such endeavors, small errors crept in. Direct3D 10.1 must thus be seen as an incremental update that corrects, with the benefit of hindsight, small holes in the previous API and brings a few additions to erase some of the restrictions that still existed.

All the improvements may be summed up in three categories:

Stricter specifications in order to limit discrepancies between multiple implementations
A handful of new features
A clear focus on rendering quality and more precisely, antialiasing
Stricter Specifications
Microsoft has taken advantage of Direct3D 10.1 to make its API even more orthogonal by eliminating special cases. Hence, support for the filtering of FP32 textures is now compulsory, while it was only optional in Direct3D 10 (though all Direct3D 10 GPUs from both manufacturers already supported it anyway). In a similar fashion, blending in 16-bit integer buffers is now obligatory, whereas its implementation was only optional with Direct3D 10.
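For comparison, on a plain Direct3D 10 device these two abilities still have to be queried per format; the sketch below (untested, and the choice of 16-bit format is our assumption) shows the checks that Direct3D 10.1 renders unnecessary:

#include <d3d10.h>

// On Direct3D 10.0 these are optional capabilities; 10.1 guarantees both.
void QueryOptionalCaps(ID3D10Device* device)
{
    UINT support = 0;
    device->CheckFormatSupport(DXGI_FORMAT_R32G32B32A32_FLOAT, &support);
    bool fp32Filter = (support & D3D10_FORMAT_SUPPORT_SHADER_SAMPLE) != 0;

    support = 0;
    device->CheckFormatSupport(DXGI_FORMAT_R16G16B16A16_UNORM, &support);
    bool int16Blend = (support & D3D10_FORMAT_SUPPORT_BLENDABLE) != 0;

    (void)fp32Filter; (void)int16Blend;  // feed these into the renderer's setup
}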

Microsoft has also strengthened the specifications with regard to computational precision, whether for blending or for in-shader operations. Many operations (addition, subtraction, multiplication and division) are now required to comply with the IEEE 754 standard, which, one must admit, isn't really exciting for gamers but will surely please researchers fond of GPGPU.

New Features
Microsoft managed to be reasonable when it came to new additions. Developers are still assimilating the features brought by Direct3D 10 and figuring out what they can really do with them. It was therefore out of the question to drown them in a flood of new features every year.

First of all, we find Cube Map Arrays. With Direct3D 10, Microsoft had introduced Texture Arrays: tables of textures that can be indexed directly in the shaders. At first glance, Texture Arrays resemble 3D textures, which have been around for a long time, but in practice their behavior is very different. For example, when accessing an element of a 3D texture, filtering occurs between the different layers, which is normal since a 3D texture is volumetric. On the contrary, the textures stored in an array may have no connection between them, so there is no filtering between neighboring elements. Furthermore, when using mipmapping, a 3D texture is halved along all three of its dimensions, which isn't the case with Texture Arrays: while the individual textures composing the array shrink, the number of slots in the array remains the same.

Direct3D 10.1 generalizes those Texture Arrays by adding support for arrays of cube maps, whereas until now only 1D and 2D texture arrays were supported.



CubeMap arrays
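Below is a rough sketch of what allocating such a cube map array could look like (untested; the view-description names follow the d3d10_1.h definitions as we understand them, and the sizes are purely illustrative):

#include <d3d10_1.h>

// A cube map array is a 2D texture array whose slice count is a multiple of
// six. Mipmapping shrinks each face, but the number of cubes never changes.
void CreateCubeMapArray(ID3D10Device1* device, UINT numCubes)
{
    D3D10_TEXTURE2D_DESC desc = {};
    desc.Width = 256;
    desc.Height = 256;
    desc.MipLevels = 1;
    desc.ArraySize = 6 * numCubes;          // six faces per cube
    desc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
    desc.SampleDesc.Count = 1;
    desc.Usage = D3D10_USAGE_DEFAULT;
    desc.BindFlags = D3D10_BIND_SHADER_RESOURCE;
    desc.MiscFlags = D3D10_RESOURCE_MISC_TEXTURECUBE;

    ID3D10Texture2D* tex = NULL;
    if (FAILED(device->CreateTexture2D(&desc, NULL, &tex)))
        return;

    D3D10_SHADER_RESOURCE_VIEW_DESC1 view = {};
    view.Format = desc.Format;
    view.ViewDimension = D3D10_1_SRV_DIMENSION_TEXTURECUBEARRAY;  // new in 10.1
    view.TextureCubeArray.MipLevels = 1;
    view.TextureCubeArray.NumCubes = numCubes;  // indexable in the shader

    ID3D10ShaderResourceView1* srv = NULL;
    device->CreateShaderResourceView1(tex, &view, &srv);
}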
Shader-core wise, Direct3D 10.1 introduces Shader Model 4.1, which brings a couple of new things such as gather4, another name for Fetch-4 (introduced with ATI's previous generation of cards). To quickly refresh your memory, this instruction retrieves 4 unfiltered elements of a single-channel texture with just one texture fetch, which permits a more efficient implementation of custom filters in shaders.



Fetch4
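To make the semantics concrete, here is a purely conceptual C++ model of what a single gather4 returns for a single-channel texture - this is not a real API, just an illustration of the instruction's behavior:

struct Gather4 { float a, b, c, d; };

// Conceptual model only: one "fetch" returns the four unfiltered texels that
// bilinear filtering would otherwise have blended at (u, v).
Gather4 gather4(const float* texels, int width, int height, float u, float v)
{
    int x0 = (int)(u * width  - 0.5f);
    int y0 = (int)(v * height - 0.5f);
    // Clamp the 2x2 footprint to the texture borders.
    if (x0 < 0) x0 = 0; if (x0 > width  - 2) x0 = width  - 2;
    if (y0 < 0) y0 = 0; if (y0 > height - 2) y0 = height - 2;

    Gather4 g;
    g.a = texels[(y0 + 1) * width + x0];      // lower-left
    g.b = texels[(y0 + 1) * width + x0 + 1];  // lower-right
    g.c = texels[ y0      * width + x0 + 1];  // upper-right
    g.d = texels[ y0      * width + x0];      // upper-left
    return g;  // the shader then applies its own, custom filter
}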
Another instruction added to Shader Model 4.1 makes it possible to retrieve the level of detail (mipmap level) during a texture sampling. Microsoft has also raised certain limits, notably the number of vertex shader input and output elements, which goes from 16 vectors of 128 bits (4 single-precision floats) to 32.



D3D 10.1 Pipeline
With regard to blending, we've already mentioned the newly supported format, Int16, but it's not the only new thing: Direct3D 10.1 now enables the specification of independent blending modes when rendering simultaneously to more than one buffer (MRT: Multiple Render Targets).
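A sketch (untested, assuming the Direct3D 10.1 blend-state structures) of what this looks like: each render target of an MRT set gets its own blend mode, which a single Direct3D 10 blend description could not express:

#include <d3d10_1.h>

void CreatePerTargetBlendState(ID3D10Device1* device)
{
    D3D10_BLEND_DESC1 bd = {};
    bd.IndependentBlendEnable = TRUE;  // new in 10.1: one mode per target

    // MRT 0: classic alpha blending.
    bd.RenderTarget[0].BlendEnable    = TRUE;
    bd.RenderTarget[0].SrcBlend       = D3D10_BLEND_SRC_ALPHA;
    bd.RenderTarget[0].DestBlend      = D3D10_BLEND_INV_SRC_ALPHA;
    bd.RenderTarget[0].BlendOp        = D3D10_BLEND_OP_ADD;
    bd.RenderTarget[0].SrcBlendAlpha  = D3D10_BLEND_ONE;
    bd.RenderTarget[0].DestBlendAlpha = D3D10_BLEND_ZERO;
    bd.RenderTarget[0].BlendOpAlpha   = D3D10_BLEND_OP_ADD;
    bd.RenderTarget[0].RenderTargetWriteMask = D3D10_COLOR_WRITE_ENABLE_ALL;

    // MRT 1: e.g. a G-buffer attribute, written without blending.
    bd.RenderTarget[1].BlendEnable = FALSE;
    bd.RenderTarget[1].RenderTargetWriteMask = D3D10_COLOR_WRITE_ENABLE_ALL;

    ID3D10BlendState1* state = NULL;
    device->CreateBlendState1(&bd, &state);
}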

Aiming At Quality
With Direct3D 10.1, Microsoft has focused on rendering quality more than on any other area, and the main focal point was antialiasing. First: from now on, support for 4x antialiasing is compulsory for 32-bit (RGBA8) as well as 64-bit (RGBA16) buffers. Furthermore, sample positions are now specified by the API and must be configurable: without going as far as freely programmable sample positions, an application must at least be able to choose between several predefined patterns.
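As a sketch (untested; D3D10_STANDARD_MULTISAMPLE_PATTERN is the d3d10_1.h constant as we understand it), requesting the now-guaranteed 4x RGBA8 target with the API-defined sample positions rather than a vendor-specific layout:

#include <d3d10_1.h>

void Create4xTarget(ID3D10Device1* device)
{
    D3D10_TEXTURE2D_DESC rt = {};
    rt.Width = 1920;
    rt.Height = 1200;
    rt.MipLevels = 1;
    rt.ArraySize = 1;
    rt.Format = DXGI_FORMAT_R8G8B8A8_UNORM;  // 4x on RGBA8: compulsory in 10.1
    rt.SampleDesc.Count = 4;
    rt.SampleDesc.Quality = D3D10_STANDARD_MULTISAMPLE_PATTERN;
    rt.Usage = D3D10_USAGE_DEFAULT;
    rt.BindFlags = D3D10_BIND_RENDER_TARGET;

    ID3D10Texture2D* color = NULL;
    device->CreateTexture2D(&rt, NULL, &color);
}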

Beyond stricter specifications, Microsoft has also sought to rationalize antialiasing management a little, by offering much more control to programmers and relying less on the GPU manufacturers' homemade recipes. One has to admit that until now, users faced a number of options quite disconcerting to beginners: apart from the antialiasing levels (2x, 4x, 8x), there was transparency antialiasing to filter alpha textures in either multisampling or supersampling mode, and on top of that came each Independent Hardware Vendor's (IHV's) specific features: CSAA, CFAA... With Direct3D 10.1, programmers can finally specify multisampling or supersampling per primitive, and they also have access to each pixel's coverage mask, which grants control over the samples to which shaders are applied.



D3D 10.1 Antialiasing
Finally, whereas Direct3D 10 enabled access to the samples of a multisampled color buffer, it's now possible to do the same thing with a multisampled depth buffer.
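A sketch of the resource setup this implies (untested; the typeless-format pairing is our assumption about how Direct3D exposes depth readback): one multisampled depth resource carrying both a depth-stencil view and a shader resource view:

#include <d3d10.h>

void CreateReadableMsaaDepth(ID3D10Device* device)
{
    D3D10_TEXTURE2D_DESC dd = {};
    dd.Width = 1920;
    dd.Height = 1200;
    dd.MipLevels = 1;
    dd.ArraySize = 1;
    dd.Format = DXGI_FORMAT_R32_TYPELESS;  // lets the two views coexist
    dd.SampleDesc.Count = 4;
    dd.Usage = D3D10_USAGE_DEFAULT;
    dd.BindFlags = D3D10_BIND_DEPTH_STENCIL | D3D10_BIND_SHADER_RESOURCE;

    ID3D10Texture2D* depth = NULL;
    if (FAILED(device->CreateTexture2D(&dd, NULL, &depth)))
        return;  // fails on 10.0-class hardware, which cannot read MSAA depth

    D3D10_DEPTH_STENCIL_VIEW_DESC dsv = {};
    dsv.Format = DXGI_FORMAT_D32_FLOAT;                   // written as depth
    dsv.ViewDimension = D3D10_DSV_DIMENSION_TEXTURE2DMS;
    ID3D10DepthStencilView* dsView = NULL;
    device->CreateDepthStencilView(depth, &dsv, &dsView);

    D3D10_SHADER_RESOURCE_VIEW_DESC srv = {};
    srv.Format = DXGI_FORMAT_R32_FLOAT;                   // read back as float
    srv.ViewDimension = D3D10_SRV_DIMENSION_TEXTURE2DMS;  // per-sample access
    ID3D10ShaderResourceView* srView = NULL;
    device->CreateShaderResourceView(depth, &srv, &srView);
}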

In practice, most of those features aren't new: each manufacturer had more or less included them in its own way and allowed their activation in its drivers. What's really new is that Direct3D 10.1 finally opens all this up to game programmers. Henceforth, driver programmers will no longer be in charge of developing new antialiasing modes; game programmers will handle it according to the specific needs of their engines, a little like what already happens on consoles, where programmers have access to a lower hardware level.

Microsoft is therefore giving developers the best it can while waiting for fully programmable ROPs, which would make all this even more flexible and clearer.

And Practically?
In practice, don't hope for much in the near term. We are still waiting for developers to master Direct3D 10, and for them not to be held back by the Direct3D 9 versions of the engines they still must maintain, so there's little chance that they'll rush towards Direct3D 10.1: the hardware is barely out, and the API won't be available until Vista's Service Pack 1 in 2008.

Nevertheless, some features should allow for interesting effects. Specifically, Cube Map Arrays could simplify dynamic reflections, even if one must not forget the impact on other portions of the pipeline. In today's games, dynamic reflections are usually applied only to the main elements (and updated far less often than the screen refreshes) in order to save fill rate. Cube Map Arrays remove one restriction, on the number of simultaneous reflections, but they don't remove the others. We'll thus wait to appreciate the feature in real games rather than in a handful of demos formatted by AMD or Microsoft.

Independent blending modes for each buffer when using MRTs should ease the development of deferred-shading engines. Combined with the ability to read the antialiasing samples of color and depth buffers, those engines will no longer be forced to abandon antialiasing for a vague blur of questionable interest.

The other new features bring more comfort to developers than real benefits to gamers.

Workstation Shootout: ATi FireGL V7600 vs. Nvidia Quadro FX 4600

A Balance Of Power This Fall?


The graphics card market for the workstation segment used to move at its own, more leisurely pace - until now. Although the rule still applies that cards aimed at the professional market space only appear a few months after their gaming/mainstream counterparts, ATI is speeding things up a bit this time. The Canadian company has released no fewer than five cards based on chips belonging to the R600 series, creating a numerical balance of power with Nvidia's product portfolio. After all, Nvidia's professional product line based on the G80 series also counts five members, as the following table shows.

Workstation Cards with Shader Model 4.0 Chips
ATi cards based on the R600 series | Nvidia cards based on the G80 series
FireGL V8650 (R600) | Quadro FX 5600 (G80)
FireGL V8600 (R600) | Quadro FX 4600 (G80)
FireGL V7600 (R600) | Quadro FX 1700 (G84)
FireGL V5600 (RV630) | Quadro FX 570 (G84)
FireGL V3600 (RV630) | Quadro FX 370 (G84)

In this article, we're comparing ATI's FireGL V7600 ($1000 plus taxes) to Nvidia's Quadro FX 4600 (€1650 including tax). For reference, we're also including the results of last year's models, the FireGL V7300 (R520) and Quadro FX 4500 (G70).

OpenGL Workstation Graphics - Market, Audience And Features
Looking at the workstation section of Nvidia's website, buyers will find a large variety of products. Aficionados will also discover several inconsistencies, though. For example, in some cases the same product is associated with several market segments in the whitepapers. Additionally, the site lacks any information that would help differentiate between the current product line and last year's models - the model numbers alone give no indication of what performance class a card actually belongs to.

While ATI's product naming scheme is not much more helpful or informative, it helps that the company's website differentiates between the 2006 and 2007 model years. While we don't want to get ahead of ourselves, we'll say at this point that buying the 2007 model is the better choice, regardless of what company you opt for.

To alleviate the problem of the confusing numbering scheme, and to help you tell the newcomers from last year's models, we have created the following table. Here, we attempt to group the cards into performance classes based on their real-world performance.

Performance Classification for Professional Workstation Graphics Cards
Market Segment | Nvidia | ATi
Ultra-High-End | Quadro FX 5600 | FireGL V8600 / V8650
High-End | Quadro FX 4600 | FireGL V7600
Mid-Range | Quadro FX 1700 (FX 4500*) | FireGL V5600 (V7300*)
Entry-Level | Quadro FX 570 / FX 370 (FX 1500*) | FireGL V3600

Key: * Graphics chip from last year's generation

Before we get to the tests themselves, let's recap the genealogy of the workstation cards. From a hardware perspective, professional cards are not really separately developed products. Instead, they are derivatives of mainstream and gaming cards, making them almost identical to their non-professional counterparts. However, as you probably know, mainstream cards are a lot less expensive.

Now, the resourceful buyer may be tempted to simply choose the cheaper alternative, but the graphics companies take steps to prevent this, by making small changes to the workstation cards' BIOSes and graphics chips. The drivers are then written so that a mainstream card only delivers very meager performance in workstation tasks. Thus, only a Quadro or FireGL card can come close to its theoretical maximum performance in OpenGL.

Workstation Cards and their Mainstream/Gaming-Equivalents
Workstation Model | Based on Chip | Fab Process | Mainstream Equivalent | Video Memory
ATi FireGL V7600 | R600 | 80 nm | Radeon HD 2900 | 512 MB GDDR3
ATi FireGL V7300 | R520 | 90 nm | Radeon X1800 | 512 MB GDDR3
Nvidia Quadro FX 4600 | G80 | 90 nm | GeForce 8800 | 768 MB GDDR3
Nvidia Quadro FX 4500 | G70 | 110 nm | GeForce 7800 | 512 MB GDDR3

In the past, clock speeds were a relatively good indicator of performance, but today you should focus more on the chip's technological details. With current cards, clock speed comparisons are only valid within the same chip generation - if you compare different generations, the numbers may quickly mislead you. One important criterion should be the shader model supported by the card; our recommendation is to choose a card supporting Shader Model 4.0.

DirectX and OpenGL used to be competing APIs for software developers. Although OpenGL still dominates the workstation segment, DirectX is gaining more and more support as well. For example, 3D Studio Max 9.0, a typical representative of workstation software, gives the user the choice between DirectX and OpenGL; to achieve optimal shader performance, Tom's Hardware recommends using DirectX in this case. Other software is increasingly adopting this API as well. Moreover, even the SPEC website now includes DirectX results in its reference scores.

Important Features at a Glance
Workstation GPU | Memory Bandwidth | DirectX | OpenGL | Shader Model | Core Clock | Memory Clock | Engine
ATi FireGL V7600 | 51.0 GB/s | 10 | 2.1 | 4.0 | 500 MHz | 510 MHz | 320 SPUs
ATi FireGL V7300 | 41.6 GB/s | 9.0c | 2.0 | 3.0 | 600 MHz | 650 MHz | 16 P / 8 V
Nvidia Quadro FX 4600 | 67.2 GB/s | 10 | 2.1 | 4.0 | 500 MHz | 700 MHz | 112 SPUs
Nvidia Quadro FX 4500 | 33.6 GB/s | 9.0c | 2.0 | 3.0 | 430 MHz | 525 MHz | 24 P / 8 V

Key: SPUs = Stream Processing Units, P = Pixel Shader, V = Vertex Shader
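As a quick sanity check on the table above, memory bandwidth follows directly from the memory clock and the bus width; the 384-bit figure below is our assumption, borrowed from the Quadro FX 4600's GeForce 8800 sibling:

bandwidth = memory clock × 2 (DDR) × bus width / 8
67.2 GB/s = 700 MHz × 2 × 384 bit / 8

The same formula reproduces the 41.6 GB/s and 33.6 GB/s of the two previous-generation cards, both of which use 256-bit interfaces.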

ATI sends its workstation lineup into the market with bold claims. According to a press release, the new R600-based product line is meant to offer a 300% performance advantage over previous models. Of course, such claims net the company the desired attention, but they also inspire a certain level of skepticism: our first reaction was that ATI was confusing marketing with hyperbole. Nonetheless, if there is even a kernel of truth to these claims, the new cards must have a lot to offer and are worth examining more closely.

In this comparison, we are limiting ourselves to the FireGL V7600, which comes with 512 MB of video memory and sells for a recommended price of $1000. ATI positions it in the high-end segment for CAD and DCC applications. Within the workstation family, it has two bigger siblings, namely the V8650 with 2 GB of memory and the V8600 with 1 GB. These two models are a good deal more expensive, and can only unleash their full potential in applications using massive textures and huge models or scenes. On the lower end, there are also two pared down versions: these are the V5600 with 512 MB of video RAM, and the V3600 with a meager 256 MB.

We are happy to report that all of ATI's cards finally sport two dual-link-capable DVI outputs, enabling the use of large wide-screen monitors. Each display can run at up to 2560x1600 pixels, for a grand total of 5120 pixels across.

In its whitepapers, ATI deliberately avoids the use of the term "CrossFire", which hails from the mainstream segment. Instead, the company soberly speaks of "multiple card support". In plain English, the principle is the same, allowing two to four cards to be used in parallel to increase the processing performance. Nvidia calls its implementation of this technique "SLI".

Compared to the previous generation, the GPU architecture has fundamentally changed. For instance, with the new generation, separate pixel and vertex shaders are a thing of the past, and have been replaced by so-called "unified shaders". The advantage of this approach is that shader resources can be dynamically allocated depending on the application's current needs. If the task has a lot of geometry computations, the vertex shader capacity is increased, while the pixel shader power is upped for rendering tasks. This process is fully automatic.

One feature is especially interesting for medical applications such as X-rays / radiology. The display engine is able to handle 10 bits per color component (R, G and B), for well over one billion colors. The same goes for the black-and-white channel (think X-ray images), which supports up to 2^10 = 1024 shades of grey rather than the standard 256.



8-pin Molex connector for auxiliary power on the FireGL V7600


V7600 Crossfire connector for use with two cards running in tandem.

New ATI HD 3800 To Support DX 10.1

HD 3800: First DX10.1, 55nm and Four-Way GPU


When the R600 graphics processor and the Radeon HD 2900 series launched, I stated that AMD had hardware that was more forward-looking than Nvidia's G80 technology. I still feel that way after looking at the latest information we obtained from AMD about the RV670. On the same day Nvidia is launching its GeForce 8800 GT, Rick Bergman, Vice President of the Graphics Product Group at AMD, disclosed some details about the Radeon HD 3800 series and beyond. However, he kept most of the juicy bits to himself pending the product launch on November 15th. We do know that this launch will focus on DX 10.1 hardware. Microsoft updated its software development kit (SDK) in August and revealed some of the changes that will be taking place.

Due to the changes required for 10.1, the RV670 graphics processor is not just a die shrink, although the move to a 55 nm process is its headline feature. RV670 should take less silicon per wafer to produce than Nvidia's 8800 GT, meaning higher margins per part. AMD hinted, but did not confirm, that it should be able to beat Nvidia's thermal envelope, especially at idle, as it chose to implement some of its mobile technology in the desktop parts.

This is the sweet spot that has been missing for almost a year. Only high-end, low-end and entry-level cards have had a presence in the marketplace, so PC gamers were forced either to spend above the traditional midrange price point for hardware that is clearly high-end, or to purchase DX10 hardware with inferior performance. The only card that came close was Nvidia's 320 MB model of the GeForce 8800 GTS. Looking forward, there will be at least three models (two from AMD and one from Nvidia) to service the "real" midrange. Traditionally, midrange parts offered 75% of the performance of high-end models at 50% or less of their price. The GeForce 8800 GT and Radeon 3800 models, with their new PCIe 2.0 interface, should service this segment well.

Beyond DX 10.1 and the 55 nm process, users will be able to combine more cards: two-, three- and four-way CrossFire will be supported on Vista. Bergman also hinted at an asymmetric version of CrossFire, meaning that cards with the same core but different memory and clock frequencies could be configured in CrossFire, stretching a consumer's dollar further. The Radeon HD 3800 series will also have an updated Universal Video Decoder (UVD) for hardware acceleration of HD DVD and Blu-ray movies.

So, if the launch goes as planned, AMD will be able to claim three firsts: first to DX 10.1, first to 55 nm and first to four-way GPU performance on Vista.

There will be two versions of the Radeon HD 3800, with pricing (as yet unconfirmed and subject to change) between $150 and $250 depending on model, clock frequency and memory configuration. These will be competitive with cards based on the technology Nvidia announced today. We wanted midrange cards, and now it appears we have them. The question that remains is "what does the change to the graphics component of DirectX in D3D 10.1 mean to consumers?" That is the real key to both launches.

ATI's Radeon 2600 XT Remixed

Introduction


When the Radeon 2600 XT was released, it met with a lukewarm response from the PC community. Available in the $150 neighborhood when it was new, the 2600 XT GDDR3 was in Radeon X1950 PRO and GeForce 7900 GS territory - both of which are notably more powerful when it comes to gaming. The 2600 XT's gaming performance is comparable to that of the 7600 GT and X1650 XT, both of which could be found for under $125 at the time. And the higher-speed GDDR4 version of the 2600 XT was even more expensive, with little extra performance to show for the price increase.

On the positive side, the 2600 XT GDDR3 went head-to-head with the GeForce 8600 GT. While both cards were priced a bit high considering their gaming performance, they were among the first mainstream video cards with DirectX 10 support and full HD video acceleration - features that appeal to people looking forward to HD video and DirectX 10 gaming.

As we approach the end of 2007, the 2600 XT's pricing position has changed dramatically. Models can be found on Newegg for as low as $100 - even cheaper than the old budget trench-fighter, the 7600 GT.

But when you look closely at the low-priced 2600 XTs, you'll notice something a tad troubling: the memory speed on these cards is usually 700 MHz GDDR3. That is 100 MHz slower than the reference GDDR3 2600 XTs tested at the card's launch - a 12.5% decrease in memory speed.

(To add to the confusion, Nvidia's partners have released DDR2 versions of the GeForce 8600 GT. These cards carry a huge 30% memory-speed penalty compared to the reference 8600 GT, which has a significant impact on performance. Happily, true 8600 GTs with 800 MHz GDDR3 can still be had for as little as $115.)

So with all of this in mind, how does the new, cheaper and slower Radeon 2600 XT compare to the reference 8600 GT with fast GDDR3 memory? Is the new 2600 XT a great buy at $100, or is it a crippled part that smart buyers should avoid?

Let's have a look at two examples of the 2600 XT, examine their features and assess their gaming performance compared to their arch enemy, the 8600 GT GDDR3.

Wednesday, December 19, 2007

AMD Expands Research and Development Operations in India

Advanced Micro Devices, the world’s second-largest maker of x86 microprocessors, has announced the opening of a new silicon design and platform research and development (R&D) facility in Bangalore, India. According to AMD, the new R&D center will allow the company to improve its operations in the region. In addition, the new facility may provide the staff working on AMD’s 45nm chips with new opportunities.


As India’s role and importance in AMD’s global R&D network increase, the number of employees in Bangalore continues to grow, requiring a new facility that will accommodate the current team while also providing room for future growth. Employees will move into the new 52,000 square-foot center upon its completion and continue to focus on the development of AMD’s most advanced, next-generation processing solutions.

Engineering staff in Bangalore are playing the lead role on “Shanghai,” AMD’s first 45nm quad-core microprocessor, and are currently involved in design testing and optimization of the new chip. Prior to their efforts on “Shanghai,” the teams were responsible for delivering key intellectual property (IP) for the first quad-core AMD Opteron microprocessor, previously codenamed “Barcelona,” AMD said.

AMD will continue operating its first facility in the city, using the existing office space for administration, sales and marketing staff.

“Our engineering employees in India play a critical role in AMD’s global design network, and this new R&D center gives them the world-class equipment and resources they need to excel,” said Hector Ruiz, chief executive of AMD. “In AMD’s quest to become the technology partner of choice for the industry, this facility is vital to help us design and deliver industry-leading solutions specifically tailored to the needs of our customers in India, and for all our customers worldwide.”