
Saturday, December 22, 2007

AMD Radeon HD 3800: ATI Strikes Back

Introduction
Things aren't looking too good for AMD. Until now, its graphics card lineup included only two worthwhile offerings: the Radeon HD 2900 XT, which performs better than the GeForce 8800 GTS 640 MB at a similar price (but with much higher noise and power consumption at peak), and perhaps the Radeon HD 2600 XT, though only for home theater enthusiasts. There was a big gap between those two cards, even if their respective price points were almost coherent. The manufacturer was ready to fill that gap with its Radeon HD 3850 and 3870, which launch only today. At least, that was the plan until, all of a sudden, NVIDIA knocked the wind out of it by launching a card that surprised everyone, NVIDIA itself included: the GeForce 8800 GT 512 MB, with a genuinely exceptional price-performance ratio.



Call of Duty 4
The situation thus becomes particularly ironic today, because AMD's very high end is beaten by a card sold at $230. It's a situation reminiscent of a time we thought was long gone, that of the first Radeon. Yet the rushed launch of the GeForce 8800 GT has been marked by very problematic availability, and supply is going to remain tight until January. So what can AMD offer in this price range for the end of the year?


Direct3D 10.1: Incompatible?
With its new range of GPUs, the Radeon HD 3000 series, AMD is the first to support the next version of Direct3D: Direct3D 10.1. But what does this new revision of Microsoft's API have in store for us?

Incompatible?
When the first pieces of information about Direct3D 10.1 leaked this summer, some websites echoed a troubling rumor: this new version would be incompatible with the previous one! Angry reactions immediately spread across the web. In a sense, Microsoft was reaping what it had sowed with the buzz generated around Direct3D 10. Gamers had had to accept that that version wouldn't be compatible with the previous ones and that it would be tied specifically to Redmond's latest OS: Vista. Microsoft had nevertheless promised that this break was inevitable in order to guarantee a future-proof API. And yet, a few months later, here were rumors of a revision that dared to be, once more, incompatible. For many, enough was enough.



Instancing 10: the demo of the Direct3D 10 SDK
However, as is often the case on the web, it all came to nothing: Direct3D 10.1 is fully compatible with its predecessor. But let's dig deeper into what we mean when we talk about compatible versions of an API. Up until the ninth version, the various DirectX iterations followed one another while maintaining backward compatibility; when you installed a new DirectX version, you could still play all of your older games that used previous versions. Similarly, a game could create a DirectX 9 interface but use it only as a DirectX 8 interface. Among other things, this allowed developers to maintain a single code path to support two kinds of cards, reserving advanced features for cards that truly handled DirectX 9. To do this, programmers had access to a structure that gave a detailed list of the card's real capabilities. Conversely, this compatibility no longer exists in Direct3D 10. To ensure older games run on Vista, Microsoft integrated both APIs into its latest OS.



Windows Vista APIs
In a similar fashion, a Direct3D 10 interface doesn't grant access to the ninth version's APIs, many of which were deleted. If developers wish a game to support both Direct3D 9 and 10, they are compelled to maintain two distinct versions of the game, which isn't really different from what they had to do to support both OpenGL and Direct3D. This is what we mean by incompatible APIs.

Conversely, it's quite possible to create a Direct3D 10.1 interface on a card that only supports Direct3D 10, the new API being a strict superset of the old one. Everything found in Direct3D 10 is also found in its big brother. The developer's only duty is to ensure that features present only in Direct3D 10.1 are never called on a Direct3D 10 card, which was already a necessity with previous versions of the API.

Obviously, the already available Direct3D 10 GPUs (G8x, G9x and R6x0) don't support the latest API's additions, which seems like a no-brainer, and yet this point has generated a lot of confusion. Actually, regarding older GPU support, Microsoft had promised the death of caps bits with Direct3D 10 and has kept its word... well, sort of: caps bits no longer exist, but they have been replaced by what Microsoft calls Feature Levels. The main difference is that it's no longer necessary to check that each feature is individually supported; one need only check whether the feature level is Direct3D 10 or Direct3D 10.1, which is enough to determine precisely what the GPU supports.

Direct3D 10.1: What's New
Let's be clear right from the start: the new things brought by this API aren't revolutionary. Direct3D 10 was a big makeover, and as always with such endeavors, small errors creep in. Direct3D 10.1 must thus be seen as an incremental update, correcting, with the benefit of hindsight, small holes in the previous API, and bringing a few additions to erase some of the restrictions that remained.

All the improvements may be summed up in three categories:

Stricter specifications in order to limit discrepancies between multiple implementations
A handful of new features
A clear focus on rendering quality and more precisely, antialiasing
Stricter Specifications
Microsoft has taken advantage of Direct3D 10.1 to make its API even more orthogonal by eliminating special cases. Hence, support for filtering of FP32 textures is now compulsory, whereas it was only optional in Direct3D 10 (though all Direct3D 10 GPUs from both manufacturers already supported it anyway). Similarly, blending in 16-bit integer buffers is now obligatory, whereas its implementation was only optional in Direct3D 10.

Microsoft has also strengthened the specifications regarding computational precision, whether in blending or in shader operations. Many operations (addition, subtraction, multiplication and division) are now required to comply with the IEEE 754 standard, which, one must admit, isn't really exciting for gamers, but will surely please researchers fond of GPGPU.

New Features
Microsoft managed to be reasonable when it came to new additions to the API. Developers are still assimilating the features brought by Direct3D 10 and figuring out what they can really do with them. It was, therefore, out of the question to drown them every year under a flood of new features.

First of all, we find Cube Map Arrays. With Direct3D 10, Microsoft had introduced Texture Arrays, arrays of textures that can be indexed directly in shaders. At first glance, Texture Arrays resemble 3D textures, which have been around for a long time, but in practice their behavior is very different. For example, when accessing an element of a 3D texture, filtering occurs between the different layers, which is normal since a 3D texture is volumetric. On the contrary, the textures stored in an array may have no connection to one another, so there is no filtering between neighboring elements. Furthermore, with mipmapping, a 3D texture is halved along all three of its dimensions, which isn't the case with Texture Arrays: while the individual textures composing the array shrink, the number of elements in the array remains the same.

Direct3D 10.1 generalizes Texture Arrays by adding support for arrays of cube maps, whereas until now only arrays of 1D and 2D textures were supported.



CubeMap arrays
On the shader core side, Direct3D 10.1 introduces Shader Model 4.1, which brings a couple of new things such as Gather4, another name for Fetch4 (introduced with ATI's previous generation of cards). To quickly refresh your memory, this instruction retrieves four unfiltered elements of a single-channel texture with just one texture fetch, which allows a more efficient implementation of custom filters in shaders.



Fetch4
Another instruction added to Shader Model 4.1 makes it possible to retrieve the level of detail (mipmap level) during a texture sampling operation. Microsoft has also raised certain limits, notably the number of vertex shader input and output elements, which goes from 16 vectors of 128 bits (four single-precision floats) to 32.



D3D 10.1 Pipeline
With regard to blending, we've already mentioned the newly supported format, Int16, but that's not the only new thing: Direct3D 10.1 now allows independent blending modes to be specified when rendering simultaneously to more than one buffer (MRT: Multiple Render Targets).

Aiming At Quality
With Direct3D 10.1, Microsoft has focused on rendering quality more than on any other area, and the main focal point was antialiasing. First: from now on, support for 4x antialiasing is compulsory for 32-bit (RGBA8) as well as 64-bit (RGBA16) buffers. Furthermore, sample positions are now specified by the API and must be configurable. Without going as far as freely programmable sample positions, an application must at least be able to choose between several predefined patterns.

Beyond stricter specifications, Microsoft has also sought to rationalize antialiasing management somewhat, by offering much more control to programmers and relying less on GPU manufacturers' homemade recipes. One has to admit that until now, users faced a number of options quite disconcerting to beginners: apart from the antialiasing levels (2x, 4x, 8x), there was transparency antialiasing to filter alpha textures in either multisampling or supersampling mode, and on top of that came features specific to each Independent Hardware Vendor (IHV): CSAA, CFAA... With Direct3D 10.1, programmers can finally specify whether they want multisampling or supersampling per primitive, and they also have access to each pixel's coverage mask, which gives them control over which samples the shaders are applied to.



D3D 10.1 Antialiasing
Finally, whereas Direct3D 10 enabled access to the samples of a multisampled color buffer, it's now possible to do the same with a multisampled depth buffer.

In practice, most of these features aren't new. Each manufacturer included them, more or less, in its own way and allowed their activation in its drivers. What's really new is that Direct3D 10.1 finally opens all this up to game programmers. Henceforth, driver developers will no longer be in charge of developing new antialiasing modes; game programmers will handle it according to the specific needs of their engines, a little like what already happens on consoles, where programmers have access to a lower hardware level.

Microsoft therefore gives developers the best it can for now, while we wait for fully programmable ROPs, which would make all this even more flexible and cleaner.

And Practically?
In practice, don't hope for much in the meantime. We are still waiting for developers to master Direct3D 10, and for them to stop being limited by the Direct3D 9 versions of the engines they still must maintain, so there's little chance they'll rush toward Direct3D 10.1; the hardware is barely out, and the API won't be available until Vista's Service Pack 1 in 2008.

Nevertheless, some features should allow for interesting effects. Specifically, Cube Map Arrays could simplify dynamic reflections, even if one must not forget the impact on other portions of the pipeline. In today's games, dynamic reflections are usually applied only to the main elements (and the reflections are updated far less frequently than the screen's refresh rate) in order to save fill rate. While Cube Map Arrays remove one restriction on the number of simultaneous reflections, they don't remove the others. We'll thus wait to really appreciate the feature in games, rather than in a handful of demos tailor-made by AMD or Microsoft.

Independent blending modes for each buffer when using MRT should ease the development of deferred shading engines. Combined with the ability to read the antialiasing samples of color and depth buffers, those engines will no longer be forced to abandon antialiasing in favor of a vague blur of questionable value.

The other new features bring more comfort to developers than real benefits to gamers.

Workstation-Shootout: ATi FireGL V7600 vs. Nvidia Quadro FX 4600

A Balance Of Power This Fall?


The graphics card market for the workstation segment used to move at its own, more leisurely pace - until now. Although the rule still applies that cards aimed at the professional market space only appear a few months after their gaming/mainstream counterparts, ATI is speeding things up a bit this time. The Canadian company has released no fewer than five cards based on chips belonging to the R600 series, creating a numerical balance of power with Nvidia's product portfolio. After all, Nvidia's professional product line based on the G80 chip also counts five members, as the following table shows.

Workstation Cards with Shader Model 4.0 Chips
ATi cards (R600 series)    Nvidia cards (G80 series)
FireGL V8650 (R600)        Quadro FX 5600 (G80)
FireGL V8600 (R600)        Quadro FX 4600 (G80)
FireGL V7600 (R600)        Quadro FX 1700 (G84)
FireGL V5600 (RV630)       Quadro FX 570 (G84)
FireGL V3600 (RV630)       Quadro FX 370 (G84)

In this article, we're comparing ATI's FireGL V7600 ($1000 plus taxes) to Nvidia's Quadro FX 4600 (€1650 including tax). For reference, we're also including the results of last year's models, the FireGL V7300 (R520) and Quadro FX 4500 (G70).

OpenGL Workstation Graphics - Market, Audience And Features
Looking at the workstation section of Nvidia's website, buyers will find a large variety of products. Aficionados will also discover several inconsistencies, though. For example, in some cases, the same product is associated with several market segments in the whitepapers. Additionally, the site lacks any information that would help differentiate between the current product line and last year's models; the model numbers alone give no indication of what performance class a card actually belongs to.

While ATI's product naming scheme is not much more helpful or informative, it helps that the company's website differentiates between the 2006 and 2007 model years. While we don't want to get ahead of ourselves, we'll say at this point that buying the 2007 model is the better choice, regardless of what company you opt for.

To alleviate the problem of the confusing numbering scheme, and to help you tell the newcomers from last year's models, we have created the following table. Here, we attempt to group the cards into performance classes based on their real-world performance.

Performance Classification for Professional Workstation Graphics Cards

Market Segment    Nvidia                               ATi
Ultra-High-End    Quadro FX 5600                       FireGL V8600 / V8650
High-End          Quadro FX 4600                       FireGL V7600
Mid-Range         Quadro FX 1700 (FX 4500*)            FireGL V5600 / (V7300*)
Entry-Level       Quadro FX 570 / FX 370 (FX 1500*)    FireGL V3600

Key: * Graphics chip from last year's generation

Before we get to the tests themselves, let's recap the genealogy of the workstation cards. From a hardware perspective, professional cards are not really separately developed products. Instead, they are derivatives of mainstream and gaming cards, making them almost identical to their non-professional counterparts. However, as you probably know, mainstream cards are a lot less expensive.

Now, the resourceful buyer may be tempted to simply choose the cheaper alternative, but the graphics companies take steps to prevent this, by making small changes to the workstation cards' BIOSes and graphics chips. The drivers are then written so that a mainstream card only delivers very meager performance in workstation tasks. Thus, only a Quadro or FireGL card can come close to its theoretical maximum performance in OpenGL.

Workstation Cards and their Mainstream/Gaming-Equivalents
Workstation Model        Chip    Fab Process    Mainstream Equivalent    Video Memory
ATi FireGL V7600         R600    80 nm          Radeon HD 2900           512 MB GDDR3
ATi FireGL V7300         R520    90 nm          Radeon X1800             512 MB GDDR3
Nvidia Quadro FX 4600    G80     90 nm          GeForce 8800             768 MB GDDR3
Nvidia Quadro FX 4500    G70     110 nm         GeForce 7800             512 MB GDDR3

In the past, clock speeds were a relatively good indicator of performance, but today, you should focus more on the chip's technological details. With current cards, clock speed comparisons are only valid across the same chip generation - if you compare different generations, the numbers may quickly mislead you. One important criterion should be the shader model supported by the card. Our recommendation is to choose a card using shader model 4.0.

DirectX and OpenGL used to be competing APIs for software developers. Although OpenGL still dominates the workstation segment, DirectX is gaining more and more support as well. For example, 3D Studio Max 9.0 is a typical representative of workstation software. The application gives the user the choice between DirectX and OpenGL, but to achieve optimal shader performance, Tom's Hardware recommends using DirectX in this case. Other software is increasingly adopting this API. Moreover, even the SPEC website now includes DirectX results in its reference scores.

Important Features at a Glance
Workstation GPU          Memory Bandwidth    DirectX    OpenGL    Shader Model    Core Clock    Memory Clock    Engine
ATi FireGL V7600         51.0 GB/s           10         2.1       4.0             500 MHz       510 MHz         320 SPUs
ATi FireGL V7300         41.6 GB/s           9.0c       2.0       3.0             600 MHz       650 MHz         16 P / 8 V
Nvidia Quadro FX 4600    67.2 GB/s           10         2.1       4.0             500 MHz       700 MHz         112 SPUs
Nvidia Quadro FX 4500    33.6 GB/s           9.0c       2.0       3.0             430 MHz       525 MHz         24 P / 8 V

Key: SPUs = Stream Processing Units, P = Pixel Shader, V = Vertex Shader

ATI sends its workstation lineup into the market with bold claims. According to a press release, the new R600-based product line is meant to offer a 300% performance advantage over previous models. Of course, such claims will net the company the desired attention, but at the same time, they also inspire a certain level of skepticism: our first reaction was that ATI was confusing marketing with hyperbole. Nonetheless, if there is even a kernel of truth to these claims, the new cards must have a lot to offer that would be worth examining more closely.

In this comparison, we are limiting ourselves to the FireGL V7600, which comes with 512 MB of video memory and sells for a recommended price of $1000. ATI positions it in the high-end segment for CAD and DCC applications. Within the workstation family, it has two bigger siblings, namely the V8650 with 2 GB of memory and the V8600 with 1 GB. These two models are a good deal more expensive, and can only unleash their full potential in applications using massive textures and huge models or scenes. On the lower end, there are also two pared down versions: these are the V5600 with 512 MB of video RAM, and the V3600 with a meager 256 MB.

We are happy to report that all of ATI's cards finally sport two dual-link capable DVI video outputs, enabling the use of large wide-screen monitors. Each display can now have a maximum of 2560x1600 pixels, for a grand total of 5120 pixels across.

In its whitepapers, ATI deliberately avoids the use of the term "CrossFire", which hails from the mainstream segment. Instead, the company soberly speaks of "multiple card support". In plain English, the principle is the same, allowing two to four cards to be used in parallel to increase the processing performance. Nvidia calls its implementation of this technique "SLI".

Compared to the previous generation, the GPU architecture has fundamentally changed. For instance, with the new generation, separate pixel and vertex shaders are a thing of the past, and have been replaced by so-called "unified shaders". The advantage of this approach is that shader resources can be dynamically allocated depending on the application's current needs. If the task has a lot of geometry computations, the vertex shader capacity is increased, while the pixel shader power is upped for rendering tasks. This process is fully automatic.

One feature is especially interesting for medical applications such as X-ray imaging and radiology. The display engine is able to handle 10 bits per color component (R, G and B), or well over one billion colors. The same goes for the black/white channel (think X-ray images), which supports up to 2^10 = 1024 shades of grey, rather than the standard 256.



8-pin Molex connector for auxiliary power on the FireGL V7600


V7600 Crossfire connector for use with two cards running in tandem.

New ATI HD 3800 To Support DX 10.1

HD 3800: First DX10.1, 55nm and Four-Way GPU


When the R600 graphics processor and the Radeon HD 2900 series launched, I stated that AMD had hardware that was more forward-looking than Nvidia's G80 technology. I still feel that way after looking at the latest information we obtained from AMD about RV670. On the same day Nvidia is launching its GeForce 8800 GT, Rick Bergman, Vice President of the Graphics Product Group at AMD, disclosed some details about the Radeon HD 3800 series and beyond. However, he kept most of the juicy bits to himself pending the product launch on November 15th. We do know that this launch will focus on DX 10.1 hardware. Microsoft updated its software development kit (SDK) in August and revealed some of the changes that would be taking place.

Due to the changes required for 10.1, the RV670 graphics processor is not just a die shrink, though it does move to a 55 nm process. RV670 should take less silicon per wafer to produce than Nvidia's 8800 GT, meaning higher margins per part. AMD hinted, but did not confirm, that it should be able to beat Nvidia's thermal envelope, especially at idle, as it chose to bring some of its mobile power-saving technology to the desktop parts.

This is the sweet spot that has been missing for almost a year. Only high-end, low-end and entry-level cards have had a presence in the marketplace. PC gamers were forced either to spend above the traditional midrange price point for hardware that is clearly high end, or to purchase DX10 hardware with inferior performance. The only card that came close was Nvidia's 320 MB model of the GeForce 8800 GTS. Looking forward, there will be at least three models (two from AMD and one from Nvidia) servicing the "real" midrange. Traditionally, midrange parts offered 75% of the performance of high-end models at 50% or less of their price. The GeForce 8800 GT and Radeon 3800 models, with the new PCIe 2.0 interface, should serve this segment well.

Beyond DX 10.1 and the 55 nm process, users will be able to combine more cards: two-, three- and four-way CrossFire will be supported on Vista. Bergman also hinted at an asymmetric version of CrossFire, meaning that cards with the same core but different memory and clock frequencies could be paired in CrossFire, stretching a consumer's dollar further. The Radeon HD 3800 series will also have an updated Universal Video Decoder (UVD) for hardware acceleration of HD DVD and Blu-ray movies.

So, if the launch goes as planned, AMD will be able to claim three firsts: first to DX 10.1, first to 55 nm, and first to four-way GPU rendering on Vista.

There will be two versions of the Radeon HD 3800, with pricing (as yet unconfirmed and subject to change) between $150 and $250 depending on model, clock frequency and memory configuration. These will be competitive with cards based on the technology Nvidia announced today. We wanted midrange cards, and now it appears we have them. The question that remains is: what does the change to the graphics component of DirectX in D3D 10.1 mean to consumers? That is the real key to both launches.

ATI's Radeon 2600 XT Remixed

Introduction


When the Radeon 2600 XT was released, it was met with a lukewarm response from the PC community. Available in the $150 neighborhood when it was new, the 2600 XT GDDR3 was in Radeon X1950 PRO and GeForce 7900 GS territory - both of which are notably more powerful when it comes to gaming. The 2600 XT's gaming performance is comparable to that of the 7600 GT and X1650 XT, both of which could be found for under $125 at the time. And the higher-speed GDDR4 version of the 2600 XT was even more expensive, with little to show for the price increase in the way of extra performance.

On the positive side, the 2600 XT GDDR3 was going head-to-head with the GeForce 8600 GT. While both cards were priced a bit high considering their gaming performance, they are among the first mainstream video cards with DirectX 10 support and full HD video acceleration. These features appeal to people who are looking forward to HD video and DirectX 10 gaming.

As we approach the end of 2007, we can see that the 2600 XT's pricing position has changed dramatically. Models can be found on Newegg for as low as $100, which is even cheaper than that old budget trench fighter, the 7600 GT.

But when you look closely at the low-priced 2600 XTs, you'll notice something a tad troubling: the memory on these cards usually runs at 700 MHz GDDR3. This is 100 MHz slower than the reference GDDR3 2600 XTs that were tested at launch, a decrease in memory speed of more than 10%.

(To add to the confusion, Nvidia's partners have released DDR2 versions of the GeForce 8600 GT to the market. These cards have a huge 30% memory speed penalty compared to the reference 8600 GT. This has a significant impact on performance. Happily, true 8600 GTs with 800 MHz GDDR3 can still be had for as little as $115.)

So with all of this in mind, how does the new, cheaper and slower Radeon 2600 XT compare to the reference 8600 GT with fast GDDR3 memory? Is the new 2600 XT a great buy at $100, or is it a crippled part that smart buyers should avoid?

Let's have a look at two examples of the 2600 XT, examine their features and assess their gaming performance compared to their arch enemy, the 8600 GT GDDR3.

Wednesday, December 19, 2007

AMD Expands Research and Development Operations in India.

Advanced Micro Devices, the world's second largest maker of x86 microprocessors, has announced the opening of a new silicon design and platform research and development (R&D) facility in Bangalore, India. According to AMD, the new R&D center will allow the company to improve its operations in the region. In addition, the new facility may provide the staff working on AMD's 45nm chips with new opportunities.


As India's role and importance in AMD's global R&D network increase, the number of employees in Bangalore continues to grow, requiring a new facility that will accommodate the current team while also providing room for future growth. Employees will move into the new 52,000 square-foot center upon its completion and continue to focus on the development of AMD's most advanced, next-generation processing solutions.

Engineering staffs in Bangalore are playing the lead role on “Shanghai,” AMD’s first 45nm quad-core microprocessor, and are currently involved in design testing and optimization of the new chip. Prior to their efforts on “Shanghai,” teams were responsible for delivering key intellectual property (IP) for the first quad-core AMD Opteron microprocessor, previously codenamed “Barcelona”, AMD said.

AMD will continue operating its first facility in the city, using the existing office space for administration, sales and marketing staffs.

“Our engineering employees in India play a critical role in AMD’s global design network, and this new R&D center gives them the world-class equipment and resources they need to excel,” said Hector Ruiz, chief executive of AMD. “In AMD’s quest to become the technology partner of choice for the industry, this facility is vital to help us design and deliver industry-leading solutions specifically tailored to the needs of our customers in India, and for all our customers worldwide.”

Monday, December 17, 2007

AMD Claims First “Swift” Fusion Processor Due in Second Half 2009

At its meeting with financial analysts on Thursday, Advanced Micro Devices was once again vague about exactly when it will be able to release its highly anticipated processor code-named Fusion. Based on the current indications from the world's second largest x86 chipmaker, the products, which combine general-purpose and graphics cores, will be delayed to the second half of 2009.


The concept chip that combines general-purpose and graphics computing capabilities, usually referred to as Fusion, is now called the Accelerated Processing Unit (APU), according to a presentation by Mario Rivas, executive vice president of the computing solutions group at AMD.
“I am happy to announce the birth of a new category, the Accelerated Processing Units. The ‘new AMD’ now has access to excellent IP on CPUs, excellent IP on graphics processing units and second to none chipsets. The integration of all these parts and our uniqueness – customer centric innovation – create the APU,” said Mr. Rivas.


The first APU, which is "on track to market in 2H 2009", is code-named Swift and features two or three general-purpose x86 cores based on AMD's new-generation micro-architecture (the same one used in Phenom processors), a graphics core based on an "existing high-end discrete" design (possibly the ATI Radeon HD 3800), a DDR3 memory controller, and a PCI Express bus controller. The chip will be made using a 45nm process technology.


“The first APU platform is code-named Swift. It gives you the choice of technologies for high-confidence volume production ramp. We want to re-use as much [IP] as possible to accelerate our quality [qualification] and time to market. So, we have an AMD Stars CPU core, the graphics core that is based on the present high-end discrete GPU core and leverages the North Bridge that is presently found in Griffin, the CPU of the Puma platform. It will be our second 45nm generation product, so the maturity of the [production technology] will be proven. It is done on the current SOI design rules, which is the process that we know how to build on very well,” Mr. Rivas explained.


Initially the company indicated that Fusion processors "are expected in late 2008/early 2009", and it anticipated using them in all of the chipmaker's "priority computing categories", including laptops, desktops, workstations and servers, as well as in "consumer electronics and solutions tailored for the unique needs of emerging markets". A little later, the company said that the first generation of Fusion chips would be aimed at laptops and that production would start in early 2009. This time AMD claims that the actual chips will reach the market only in the second half of 2009, which may mean the product will only be launched commercially in Q4 2009. Still, the company says it is minimizing all the risks and hopes to really deliver the product on time.


“By optimizing the choice of IP blocks we have less risks and faster time to market in the second half of 2009,” claimed Mr. Rivas, executive vice president of the computing solutions group at AMD.

Sunday, December 16, 2007

AMD: Survival and the Future

Since its acquisition of ATI, AMD has developed into a platform vendor. In addition to CPUs, the company also offers chipsets and graphics processors. Phenom, the company's first quad-core CPU for the desktop segment, was finally released after many delays only recently.

AMD is also a company fighting to survive. Indeed, the situation is so dire that the German state of Saxony, where AMD's Fab 36 is located, as well as the German government, felt the company needed financial support and contributed over $384 million to the firm's coffers.

Meanwhile, AMD's main rival Intel finds itself in a completely different situation. For years now, Intel has been subsidizing the PC market with so-called advertising-costs subsidies, and it's not only the large retail chains that benefit from this support. In addition to the server space, AMD is now also concentrating on the mass market. The downside is that margins are the lowest in this segment. If AMD floundered, the consequences for the market would be dire. Effectively, there would be no competition without AMD, leaving us with a monopoly in the form of Intel, with everyone from OEMs to end users at the mercy of one company's pricing politics.

Tom's: Mister Polster, thanks to its acquisition of ATI last year, AMD is now in a position similar to that of your competition. Shouldn't this fact allow your company to gain new customers, since you can now act as a single source for a computer's central components? After all, AMD also now produces chipsets and graphics processors in addition to CPUs. The word "platform" comes to mind.

J. Polster: We are positioned even better than the competition. Compared to AMD's previous company structure, we now possess our own platform, which comprises the chipset and the graphics solution as well as the processor. To my knowledge, the competition does not offer graphics chips.

Tom's: In your opinion, will AMD be able to survive without support on a national and a European level?

J. Polster: Yes, of course. No company can survive on financial benefits alone. We make very large investments that have to pay off over time. Within the entire IT sector, processor and chip makers are the ones that bear the largest risks.

Tom's: Is life for you as a CEO becoming more uncomfortable right now?

J. Polster: Well, I've been with AMD for quite a while. And since you asked, it has never exactly been cozy. In our business, you pull no punches, and no quarter is given.

Tom's: Will AMD be able to retain its independence?

J. Polster: I assume you're referring to the 8% investment by Mubadala Development? This is a clear case of an investor that expects a certain return on its investment. To us, the fact that the company acquired shares is more a sign of confidence. Also, this transaction does not constitute a majority investment or even a takeover, meaning it didn't have to be reviewed by the U.S. Committee on Foreign Investment.

Tom's: Were you surprised by the performance figures of the new Phenom processor?

J. Polster: No. If anything, I was pleasantly surprised by its mainstream performance. We offer a strong and solid platform, which we have been able to realize as a result of our acquisition of ATI. I will admit that we're currently not quite at the top in the high-end market.

Tom's: What does the new strategy for desktop CPUs look like? Will AMD be concentrating on the lower- and middle-price segments, in effect ceding the high-end market to the competition?

J. Polster: We are concentrating on markets where we can achieve large volumes. And that happens to be the mass market, where we can cater to the individual segments. We never intended to start any kind of x-core war.

Tom's: In several online stores, the less expensive AMD Phenom is outselling comparable CPUs the competition offers. Would you say the Phenom is enjoying a good reception in the market?

J. Polster: Yes, Phenom is very well received. We are continuing our strategy of being a little more affordable than the competition. With the Phenom CPU, we are offering products at a very attractive price.

Tom's: Why did AMD drop the original X4 (and X3) designators from the Phenom product name at the last minute?

J. Polster: You see, we never intended to start any kind of x-core war that could have been reduced to a certain naming convention. You know, like "who has the most cores." From the perspective of the buyers and the users, processor designations are playing less and less of a role. After all, nowadays most people don't even know what kind of a processor belongs to a certain designation.

Tom's: Why are the new CPUs shipping at such comparatively low clock speeds?

J. Polster: As I said before, that is directly related to the market segments we are addressing. The mainstream segment is where the bulk of the quad-core CPUs are sold. There, we are ideally positioned with the Phenom models currently in the market. Of course we will still be releasing versions of the Phenom running at higher clock speeds.

AMD Projects Impairment Charge Due to Weak Graphics, Multimedia Business Performance

Just one day ahead of its meeting with financial analysts, Advanced Micro Devices said that the price it paid for graphics and multimedia chip designer ATI Technologies last year was too high, and that the revenues it currently gets and will be able to obtain from its graphics and multimedia businesses are below expectations.

AMD: ATI Is Guilty of Everything

According to a filing with the U.S. Securities and Exchange Commission (SEC), AMD concluded that the current carrying value of the goodwill it had recorded as a result of its October 2006 acquisition of ATI Technologies was impaired. The write-down will allow AMD to explain to financial analysts why its current market capitalization ($4.97 billion at press time) is below the price it paid for ATI last year, and to attribute its poor financial results and losses to problems that allegedly existed at ATI before the merger.

“The acquisition took place at the moment, when ATI was not really leading in terms of technology… It is not like we acquired ATI and we lost market share. It was just a consequence of [ATI’s execution]: Nvidia had better graphics than ATI back then and that is why AMD lost market share. We also should consider [hardware] cycles of OEMs: if you are missing their cycles, you are out for a while. You have to be [ready] with the right [product] part at the right moment to get a cycle. When you are in, you are going to stay in for a long time. So, the main reason behind the share loss is missing OEM cycles. But we are regaining them now,” recently said Vincenzo Pistillo, director of consumer business development in EMEA region for AMD.

“This conclusion was reached based on the results of an updated long-term financial outlook for the businesses of the former ATI Technologies as part of AMD’s strategic planning cycle conducted annually during the company’s fourth quarter and based on the preliminary findings of the company’s annual goodwill impairment testing that commenced in the beginning of October 2007,” a statement concerning the write-down reads.

AMD’s Graphics Product Group Performs Below Expectations

Formally speaking, ATI Radeon X1900-series graphics cards were considerably more advanced than Nvidia’s GeForce 7900-series offerings at the time when AMD acquired the Canada-based graphics chip developer. But ATI, which at the time was already AMD’s graphics product group, could not launch its DirectX 10-supporting high-end offering last November and then was also late with mainstream DX10 graphics products.

But the market share decline emerged not because ATI could not offer a competitor to Nvidia’s GeForce 8800 GTX, which still retails for $549 and higher. Just after AMD and ATI announced the transaction, partners of Intel and ATI annulled orders for ATI Radeon Xpress-branded chipsets for Intel processors, which dramatically lowered ATI’s market share from 27.6% to 20.3%, according to Jon Peddie Research (JPR) data. However, already in Q4 2006 the market share of AMD’s graphics product group rebounded to 23%, only to gradually decline to 19.1% in Q3 2007.

Clearly, the loss of Intel-compatible chipset sales, as well as of overall graphics adapter market share, negatively affected sales of graphics products at AMD. Whereas in Q3 FY2006* ATI earned $325 million on its desktop and mobile discrete graphics products, in Q3 FY2007* AMD’s graphics product group reported only $252 million in revenue (only a bit more than the $228.3 million ATI used to earn on desktop standalone products alone).

It should be noted that the third calendar quarter is usually the strongest of the year in terms of volume, and AMD publicly stated that despite the new product launch it does not expect graphics revenue to increase in Q4 FY2007. If similar calendar periods are compared, the picture looks even worse for AMD: in Q2 FY2007 (which ended on June 30) its graphics product group earned $195 million, down 40% from ATI’s discrete graphics revenue in Q3 FY2006.

Consumer Electronics Sales of Former ATI Dip Further

AMD’s graphics product group evidently experienced a number of issues with the transition to the DirectX 10 architecture, and its relatively weak business performance could perhaps be explained by technology-related issues. But is the former ATI division the only one whose performance leaves much to be desired? It seems not.

AMD's third-quarter consumer electronics (CE) segment revenue was $97 million. Under consumer electronics, AMD counts sales of chips for handhelds and TV sets, royalties from video game console manufacturers and, quite possibly, non-recurring engineering (NRE) work that the company’s specialists may do in one of those segments.

ATI earned $150 million ($145 million without NRE) back in its Q3 FY2006 on handheld/DTV processors, NRE and royalties. It should be noted that back in spring 2006 the massively successful Nintendo Wii game console was not yet on the market. The Microsoft Xbox 360 also hardly brought ATI much due to weak seasonality for game consoles, as manufacturers only start to ramp up production of game systems for the holiday season in the summer.

Even if AMD got $20 million in royalties for the Wii and Xbox 360 (which is a huge underestimation) in the seasonally strong quarter that ended on September 30, actual sales of CE products have nearly halved since ATI’s days, from $135 million in a seasonally weak quarter to $77 million in a seasonally strong one.
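The back-of-the-envelope arithmetic behind that comparison can be sketched as follows; all figures are in millions of US dollars and come from the article itself, with the $20 million royalty number being the article's deliberately low estimate:

```python
# All figures in millions of USD, taken from the article above.
amd_ce_revenue = 97        # AMD's reported Q3 FY2007 consumer electronics segment revenue
assumed_royalties = 20     # article's (deliberately low) estimate of Wii/Xbox 360 royalties
ati_ce_products = 135      # article's figure for ATI's CE product sales in Q3 FY2006

# Stripping the assumed royalties from the segment leaves actual CE product sales.
amd_ce_products = amd_ce_revenue - assumed_royalties   # 77

# Ratio of ATI-era to AMD-era CE product sales: roughly 1.75, i.e. "nearly halved".
decline_factor = ati_ce_products / amd_ce_products

print(amd_ce_products, round(decline_factor, 2))
```

Since a higher royalty figure would only shrink the $77 million remainder further, the "nearly halved" conclusion is conservative.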

Microprocessor Sales Down, Chipset Revenues Shady

Because central processing units (CPUs) from Intel Corp. hold a performance advantage over CPUs by AMD, microprocessor sales appear to be considerably down compared to the previous year.

Back in Q3 FY2006 the world’s second largest maker of x86 chips earned $1.33 billion on its computing products (CPUs only at the time), whereas in Q3 FY2007 it reported computing solutions group revenue of $1.283 billion (which now includes sales of both CPUs and chipsets).

ATI earned nearly $170 million on mobile and desktop core-logic sets for AMD and Intel processors in Q3 FY2006, but since the lion’s share of those earnings most likely came from Intel-compatible chipsets, this number can hardly be compared to anything now. Unlike CPUs, chipsets cost about $25 - $30 on average in the best-case scenario, and given the current position of AMD processors on the market, AMD probably had to concentrate on lower-end solutions.

In any case, after the acquisition by AMD, the former ATI cannot sell any significant number of chipsets compatible with Intel processors. Its maximum chipset market share will therefore equal AMD’s processor share, while its realistic market share may be even lower, as Nvidia, SiS and VIA are still on the market.

AMD to Write Down ATI Acquisition

While it is evident that the former ATI’s business has suffered considerably in the most recent seventeen months, AMD insists that the dramatic revenue decline is explained by ATI’s issues with execution, which went unnoticed by AMD during an acquisition process that lasted nearly a year, starting in December 2005 and closing in October 2006. It is interesting to note that without the former ATI, AMD’s revenue in the most recent quarter could have been as low as $1.2 billion (thanks to the later-than-expected quad-core chip launch in September), instead of $1.632 billion.

Currently the chipmaker has no idea how much it overpaid for ATI Technologies. But the acknowledgement that ATI’s business may bring in less revenue than expected a while ago may pursue a number of different goals, including focusing analysts’ and investors’ attention on certain aspects of AMD’s business instead of on AMD’s business in general.

“The company expects that the impairment charge will be material, but the company has determined that, as of the time of this filing, it is unable in good faith to make a determination of an estimate of the amount or range of amounts of the impairment charge. […] In any event within 4 business days after it makes a determination of such an estimate or range of estimates,” the statement by AMD reads.

*In this news story we compare data between AMD’s Q3 of fiscal 2007 (which ended on September 30, 2007) and ATI’s Q3 of fiscal 2006 (which ended on May 31, 2006).