AMD to Buy ATI

AMD to buy ATI for about $5.4 billion is one of a flood of news reports this morning confirming rumors that Advanced Micro Devices is buying ATI Technologies. The About AMD & ATI Page on the ATI Website (thanks Mike Martinez) has a preliminary outline of where this will lead, and here's the announcement:

NEW YORK (Reuters) - Advanced Micro Devices Inc (NYSE:AMD - news), the No. 2 supplier of computer processors, said on Monday it would acquire graphics chip maker ATI Technologies Inc. for $5.4 billion in cash and stock to expand its product mix and grow market share as it battles Intel Corp. (Nasdaq:INTC - news).

Under terms of the deal, AMD will acquire all of the outstanding common shares of ATI (Nasdaq:ATYT - news) (Toronto:ATY.TO - news) for $4.2 billion in cash and 57 million shares of AMD common stock, based on the number of shares of ATI common stock outstanding on July 21.

AMD said it would pay $20.47 for each ATI share. That marks a 24 percent premium over ATI's closing stock price of $16.56 on Nasdaq on Friday. The stock added another 7 percent to $17.68 in after-hours trading amid media reports of the expected deal.

The consideration for each outstanding share of ATI comprises $16.40 in cash and 0.2229 shares of AMD common stock, the companies said.
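For anyone who wants to check the arithmetic, here is a minimal Python sketch that reconstructs the quoted figures; the implied AMD reference price is derived from the numbers above rather than stated anywhere in the announcement.

# Consistency check on the quoted deal terms (all inputs from the
# announcement above; the implied AMD share price is derived, not quoted).

cash_per_share = 16.40         # USD cash per ATI share
amd_shares_per_share = 0.2229  # AMD shares issued per ATI share
total_per_share = 20.47        # quoted total consideration per ATI share
ati_close_friday = 16.56       # ATI close on Nasdaq on Friday

# Value of the stock portion implied by the quoted totals.
stock_value_per_share = total_per_share - cash_per_share
implied_amd_price = stock_value_per_share / amd_shares_per_share

# Premium over Friday's close.
premium = total_per_share / ati_close_friday - 1

# Share counts implied by the $4.2 billion cash portion.
implied_ati_shares = 4.2e9 / cash_per_share                      # ~256 million
implied_amd_shares = implied_ati_shares * amd_shares_per_share   # ~57 million

print(f"implied AMD reference price: ${implied_amd_price:.2f}")   # ~$18.26
print(f"premium over Friday's close: {premium:.1%}")              # ~23.6%
print(f"implied AMD shares issued:   {implied_amd_shares/1e6:.0f} million")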

AMD, the No. 2 supplier of computer processors, said it expects to finance the cash portion of the transaction with a combination of cash and new debt.

AMD has obtained a $2.5 billion term loan commitment from Morgan Stanley Senior Funding which, together with combined existing cash, cash equivalents, and short-term investment balances of about $3 billion, provides full funding for the deal, it said.

ATI said it has agreed to a termination fee of $162 million. The deal is subject to the approval of ATI shareholders and regulators in the United States and Canada.


61. Re: FoxConn Jul 25, 2006, 00:13 DrEvil
 
This is a more useful link: http://arstechnica.com/news.ars/post/20060724-7333.html

 
60. Re: FoxConn Jul 24, 2006, 23:35 MacD
 
All graphics will be integrated into the CPU, where it can be done better and faster than on a GPU

You're gonna hafta make a few arguments to back up that claim, 'cause as far as I know, it's just plain not true. The whole graphical computation biz is based on a certain architecture; the underlying physics/graphics is done using matrices and linear algebraic computations. That kind of stuff is done in a massively parallel fashion. Your cpu works in a different way; the underlying layout of the circuits on the chips makes the cpu/gpu 'incompatible'.
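As an illustration of that parallelism point (a hypothetical sketch, not anything from the post itself or from real GPU code): every entry of a matrix product depends only on one row of the first matrix and one column of the second, so all entries can be computed independently, which is exactly the data-parallel structure GPU hardware is built around and that a conventional CPU core walks through one element at a time.

# Each output row of C = A x B is independent of every other row, so the
# rows can be farmed out to separate workers -- a crude stand-in for the
# thousands of threads a GPU launches for the same job.

from concurrent.futures import ProcessPoolExecutor

def matmul_row(args):
    """Compute one row of the product; needs only that row of A and all of B."""
    row, B = args
    cols = len(B[0])
    inner = len(B)
    return [sum(row[k] * B[k][j] for k in range(inner)) for j in range(cols)]

def matmul_parallel(A, B, workers=4):
    """Dispatch each output row to its own worker process."""
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(matmul_row, [(row, B) for row in A]))

if __name__ == "__main__":
    A = [[1, 2], [3, 4]]
    B = [[5, 6], [7, 8]]
    print(matmul_parallel(A, B))   # [[19, 22], [43, 50]]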

Not only that, but integrating the gpu into the cpu does something terrible to yields; putting two different complex architectures in one package is error prone, and if either the cpu or the gpu has too many defects, the whole combination package must be thrown away. And that's VERY expensive. It's one of the main reasons why second-level cache is kept so small.
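A back-of-the-envelope version of that yield argument, using the textbook exponential defect model with made-up numbers (none of these figures are real AMD or ATI data):

# Exponential defect model: yield = exp(-defect_density * die_area).
# A single package carrying both architectures is lost whenever either half
# is bad, so its yield is roughly the product of the two individual yields.

import math

def die_yield(defects_per_cm2, area_cm2):
    """Fraction of dies with zero defects under the exponential model."""
    return math.exp(-defects_per_cm2 * area_cm2)

DEFECT_DENSITY = 0.5   # assumed defects per cm^2
CPU_AREA = 2.0         # assumed CPU die area, cm^2
GPU_AREA = 3.0         # assumed GPU die area, cm^2

y_cpu = die_yield(DEFECT_DENSITY, CPU_AREA)                  # ~37%
y_gpu = die_yield(DEFECT_DENSITY, GPU_AREA)                  # ~22%
y_combined = die_yield(DEFECT_DENSITY, CPU_AREA + GPU_AREA)  # ~8%

print(f"CPU alone: {y_cpu:.0%}  GPU alone: {y_gpu:.0%}  combined: {y_combined:.0%}")
assert abs(y_combined - y_cpu * y_gpu) < 1e-12   # multiplicative, as claimed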

So keeping the gpu and the cpu separate is quite important, and not just for these two reasons.
The gpu/cpu combo only makes sense if you're going the massively multithreaded route, with a massive network of parallel gpu/cpus working in combination with each other... which brings up hardware/programming issues all its own.

So, I've done my homework (most of it at uni), now you show yours; I'm quite curious to hear your arguments, and that ain't sarcasm.

 
59. Re: Credit where credit is due Jul 24, 2006, 22:03 Zathrus
 
Oh god, the inquirer?

Having a speculative article published on that site is more damning than anything reasonable, intelligent, and informed people could ever say.

 
58. Re: Credit where credit is due Jul 24, 2006, 21:56 nin
 


Oh god, the inquirer?

I thought it was well established they made up shit?




-----------------------------------------------------
GW: Nilaar Madalla, lvl 20 R/Mo / Tolyl Nor, lvl 20 E/Mo / Xylos Gath, lvl 16 W/Mo

No venomhed, you can't fuck me. Please stop asking.
 
http://www.nin.com/pub/tension/
57. Credit where credit is due Jul 24, 2006, 21:42 zee
 
To read about the CPU-GPU integration speculations go here:

http://theinquirer.net/default.aspx?article=33219

 
56. Re: Why? Jul 24, 2006, 20:21 skyguy
 
AMD can't build a chipset worth a damn to save their corporate soul. Anybody remember the 761/762 and what a raging pile of crap that was?

Hey! That's the chipset in my main workstation!

I don't use it for gaming, but the performance otherwise has held up great over the last few years. It's an SMP 762 chipset running Linux; I have both PCI buses stuffed full of peripherals without any real performance problems.


 
55. Re: What this means. Jul 24, 2006, 19:48 Zathrus
 
And they cost easily 3x as much...

They cost 3x as much as what? Some fantasy GPU chip that doesn't even exist in the real world?

Gotcha.

It's about cost/performance. 3d cards (and even CPUs) are sorely under-utilized as is.

You're preaching to me about cost/performance? Are you really that clueless?

And no, 3D cards are not under utilized in the current generation. Not unless you run them at low (1024x768 or under) resolutions or don't take advantage of all the features of the cards/games. If you bother to look at current benchmarks you'll see that modern games (Oblivion, FEAR, etc) are, in fact, GPU limited. A few months ago this wasn't true, but developers ramped up the graphics quality and features. The next gen cards will come out and we'll go back to a surplus of power, and then developers will ramp it up again.

As for CPUs -- yes, they've been under utilized in games for years. But I'd kill for more CPU power on my work desktop. And on our servers. But CPUs are largely under-utilized because they're waiting on main memory for data. And yet you want to further slow main memory by increasing contention for it as a resource. Yeah... that's smart. Because we've all seen the absolutely stunning performance numbers given by integrated video chipsets with no memory in comparison to identical or nearly identical GPUs with dedicated memory.

Instead you make vague references about people not doing their research and post nothing of substance yourself.

You didn't do your research. And you want substance? Fine.

Caches do not help with lack of bandwidth -- they help with latency issues. In fact, caches actually harm you if you're starved for transfer rates (aka bandwidth), since they slow requests for data during a cache miss. Which is pretty much all you're going to have when you're streaming huge amounts of data in and out of the core. Caches are not free. Of course, if you'd ever taken any computer architecture courses you'd know that.

That's why GPUs haven't bothered with caches and have gone for more memory with higher transfer rates. Because caches do absolutely nothing for them. Unless you consider the onboard memory to be one huge cache (which it effectively is).
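A toy average-memory-access-time model of the same point (the cycle counts are assumptions, not real hardware figures): a cache only pays off when data is reused, and a streaming workload that touches each element exactly once just eats the lookup cost on top of every miss.

# Classic AMAT model: every access checks the cache first, misses go to DRAM.

def avg_access_time(hit_rate, hit_time, miss_penalty):
    """Average cycles per memory access for a single-level cache."""
    return hit_time + (1.0 - hit_rate) * miss_penalty

HIT_TIME = 3         # assumed cache lookup cost, cycles
MISS_PENALTY = 200   # assumed DRAM access cost, cycles

reuse_heavy = avg_access_time(hit_rate=0.95, hit_time=HIT_TIME, miss_penalty=MISS_PENALTY)
streaming   = avg_access_time(hit_rate=0.0,  hit_time=HIT_TIME, miss_penalty=MISS_PENALTY)
no_cache    = MISS_PENALTY   # skip the lookup and go straight to memory

print(f"reuse-heavy workload, with cache: {reuse_heavy:.0f} cycles/access")  # ~13
print(f"streaming workload, with cache:   {streaming:.0f} cycles/access")    # 203
print(f"streaming workload, no cache:     {no_cache:.0f} cycles/access")     # 200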

But I'm sure you already knew all of this, given how wonderfully insightful and courteous your comments have been.

 
54. No subject Jul 24, 2006, 18:09 Dev
 
The GPU isn't going to be integrated into CPU for a very long time (way more than 3-5 years).

Aside from cost (most computers sold have integrated video in the chipset, because the businesses and people buying an average computer aren't interested in paying more for a separate video card), there are other reasons.

The huge increase in transistor count is a great point (modern GPUs have more transistors than modern CPUs).

The massive mismatch in core running speeds is another good point someone brought up.

There's also memory. Modern graphics cards are often bottlenecked by the memory bandwidth and access times. System ram is FAR FAR slower than graphics ram.

The people willing to pay the extra cost for this (and it will NOT be cheap) would be gamers. And guess what... gamers aren't going to want to accept a 50% reduction in game performance so it will work with system memory.

And the rest of the world isn't going to pay 5x the system memory price to equip entire systems with the current high speed memory in graphics cards (generally 2 or 3 gens ahead of system memory).

So I'm sorry, but your vision of GPUs merging into CPUs just won't happen for a very long time, not until many of the above factors have changed; way more than 3-5 years.

I can see integrated video (in the chipset, not the CPU) becoming more common among low-end computers, but it already is VERY common. In fact, I believe that ATI makes more money from integrated stuff than from their retail graphics cards, simply because of the sheer numbers they sell.

My 2 cents

 
53. Re: ... Jul 24, 2006, 17:41 DrEvil
 
Not gunna happen.

 
52. Re: So obvious it shouldn't need explain Jul 24, 2006, 17:38 MachineMk2
 
So, if I get a system that has a processor (or two or three) and a GPU built into it, with some sort of controller that optimizes the RAM so I need less of it...hmm...wait a minute...I think I have something just like that (or pretty darn similar) hooked up to my TV.

Does this mean that PS3 is a computer?

 
51. So obvious it shouldn't need explaining Jul 24, 2006, 17:34 UnderLord
 
Unified memory is an obvious cost-cutting exercise, because:
Why do processors have several levels of on-chip cache?
Because main memory fetches are too damned slow.
So let's share main memory with another processor for graphics; that will help performance no end!
Claims of 'interleaving' fetches are BS too, before you bring up that old furphy; they still cost performance in line-fetch latency.
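A rough sketch of the contention argument, with assumed numbers standing in for any real measurements: two agents sharing one memory bus each get only a fraction of the peak bandwidth, and a fetch that arrives while the bus is serving the other agent waits out part of that transfer.

# Toy model of a CPU and GPU sharing one memory controller.

PEAK_BW_GBPS = 12.8    # assumed dual-channel DDR2-800 peak bandwidth
LINE_FETCH_NS = 60     # assumed cost of one cache-line fetch from DRAM

def shared_memory_view(cpu_share, gpu_share):
    """Split peak bandwidth by demand and estimate the extra latency a CPU
    fetch pays when it lands behind an in-flight GPU transfer."""
    cpu_bw = PEAK_BW_GBPS * cpu_share
    gpu_bw = PEAK_BW_GBPS * gpu_share
    # If the bus is busy with GPU traffic a fraction gpu_share of the time,
    # a new CPU fetch waits, on average, half a line fetch before starting.
    cpu_latency = LINE_FETCH_NS + gpu_share * LINE_FETCH_NS / 2
    return cpu_bw, gpu_bw, cpu_latency

cpu_bw, gpu_bw, cpu_latency = shared_memory_view(cpu_share=0.4, gpu_share=0.6)
print(f"CPU: {cpu_bw:.1f} GB/s, GPU: {gpu_bw:.1f} GB/s (vs {PEAK_BW_GBPS} GB/s unshared)")
print(f"CPU line fetch: {cpu_latency:.0f} ns (vs {LINE_FETCH_NS} ns unshared)")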

 
50. Re: ... Jul 24, 2006, 17:26 m00t
 
Maybe as an additional GPU but not as a replacement... it would limit upgradability; it would be like going back to soldering processors into their socket.

Depends on what you're using it for. For dedicated game systems (consoles)? It's great. For consumer desktops? Well, maybe not yet. Though as GPUs become more general in their processing, performance increases will come from optimized micro-code over mid-generational hardware changes. This will extend the life-span of the hardware.

Also, not having to carry around the daughter-card creates a huge savings in terms of redundant hardware. I mean, how much of your $600 video card is spent on RAM? Probably near or more than half of it. True, there is a premium on chips that have all pipelines functioning (lower-end editions are often high-end editions that have some of their pipelines disabled because they didn't pass QA), but GDDR3 is not cheap by any means.

 
49. Re: What this means Jul 24, 2006, 17:22 m00t
 
It's just a penny pinching solution, nothing more.

Is that to say that the original X-Box was not big enough for you?

I, for one, am tired of big clunky towers. But I like the customizability that I can get from them over a laptop.

Why do I have 2.5GB of RAM in my system when 512MB of that is purely for video and is only fully utilized when playing games? Why can't I let my other apps use it?

A unified memory structure is the future of this cycle in hardware development. It reduces costs (significantly). It may temporarily complicate architectures as it bridges the generation gap between non-unified and unified, but the transition phase will not be a difficult one (from a consumer perspective).

I don't know about you, but I like to pay less for my hardware, not more.

 
48. Re: What this means. Jul 24, 2006, 17:18 m00t
 
And they cost easily 3x as much...

This isn't about raw performance. Stop getting hung up on that. It's about cost/performance. 3D cards (and even CPUs) are sorely under-utilized as is. The next generations are going to be about shrinking the die and reducing power consumption over pure performance increases, because we are beginning to see serious limits. Most of the advances in performance have come not from raw increases in speed but through optimized processes (algorithms, additional hardware on die, chip mfg improvements, etc). This is where future performance increases lie, not e-penis "my mhz is bigger than yours" that gimps such as yourself can't get past.

Perhaps if you had read the whole post before responding with your ignorant bullshit you would have seen this:


One thing this means, though, is that bus speeds and bandwidth will need to grow to take back the performance loss of moving the GPU to a thinner bus and slower RAM.
Look for A(MD/TI) moving to larger caches...


But I can't expect fuckwits like you to grasp anything like that.

Instead you make vague references about people not doing their research and post nothing of substance yourself.

Die in a fire.

 
47. What else it means Jul 24, 2006, 17:16 UnderLord
 
I forgot something else I meant to say,
We should not forget that ATI is Canadian; I would think those 2500 people are worrying about their jobs, which is a sight more of a worry than how well your next gaming PC will perform.
The major question would be, will AMD move the work to the US or to India or China?
(Disclosure: Speaking as an Australian Banking Platform and Applications programmer whose job was sent to India by EDS)

 
46. Re: What this means Jul 24, 2006, 17:07 UnderLord
 
All 'unified memory' solutions sharing main memory with the GPU are a bloody disaster for both CPU and GPU performance, and that includes consoles. It's just a penny-pinching solution, nothing more.
Having said that, now that MS, Sony and Nintendo could get both GPU and CPU in a guaranteed interoperable form from the one company, what do you think of Nvidia's and Intel's chances in the next round of console development?
Also, ATI makes a lot more chips than has been mentioned in these comments; look here - http://www.ati.com/companyinfo/about/index.html


 
45. ... Jul 24, 2006, 17:05 theyarecomingforyou
 
Later, GPUs will likely be moved on-die with CPUs as a 3rd or 4th core as they will now be able to take advantage of the higher-density manufacturing techniques that CPU-makers generally pioneer.
Maybe as an additional GPU but not as a replacement... it would limit upgradability; it would be like going back to soldering processors into their socket.

- - - - - - - - - - - - - -
Founder of the "I Hate Smiley Fitz" society

Remember: Riley has autism. He has trouble communicating, and in an overstimulating
environment, he can get frightened and run away, leaving his parents frantic. - Auburn
 
44. Re: What this means. Jul 24, 2006, 16:32 Zathrus
 
You go out and buy an A(MD/TI) GPU and drop it in the cHT socket. Voila, instant 3D gaming

And watch the more expensive, full-fledged boards stomp all over you, because they have faster memory and a far, far faster memory bus.

Hypertransport, even the fastest version, tops out at 20.8 GBps. Current graphics cards top out at nearly 50 GBps.
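Those two numbers can be reconstructed from plausible link and memory parameters (my assumptions, not figures stated in the post): a 32-bit HyperTransport link double-pumped at 2.6 GHz in one direction, and a 256-bit graphics memory bus running GDDR3 at roughly 1.55 GT/s.

# Peak bandwidth = bus width in bytes x transfer rate.

def bus_bandwidth_gbps(width_bits, transfers_per_sec):
    """Peak bandwidth of a parallel bus, in GB/s."""
    return width_bits / 8 * transfers_per_sec / 1e9

# 32-bit HyperTransport link at 2.6 GHz, double data rate, one direction.
ht_peak = bus_bandwidth_gbps(32, 5.2e9)          # 20.8 GB/s

# 256-bit graphics memory bus with ~1.55 GT/s GDDR3 (X1900 XTX-class card).
gpu_mem_peak = bus_bandwidth_gbps(256, 1.55e9)   # 49.6 GB/s

print(f"HyperTransport link:  {ht_peak:.1f} GB/s")
print(f"Graphics card memory: {gpu_mem_peak:.1f} GB/s")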

I really wish people would do some damned research into the basics before posting shit like this.

 
43. AMD slashes processor prices Jul 24, 2006, 16:27 Kxmode
 
Official!
http://www.amd.com/us-en/Corporate/VirtualPressRoom/0,,51_104_609,00.html

The rest...
http://news.google.com/nwshp?hl=en&tab=wn&ncl=http://www.eetimes.com/news/semi/showArticle.jhtml%3FarticleID%3D191000263

I wonder if this buyout serves two purposes. First, in the immediate short term, it diverts attention away from steep processor price reductions. Second, over the long term, it helps AMD tap into the mobile CPU market. If both are the case, this is a good move.



-----
latest track: http://www.kxmode.com/media/music/kxmode_-_asylum_-_05-10-2006.mp3
more free music: http://music.download.com/kxmode
42. What this means. Jul 24, 2006, 16:19 m00t
 
AMD will have a top of the line graphics chip that supports Coherent Hyper-Transport out the gate.

You will no longer have to buy video cards for AMD systems, you will buy GPUs and plug them directly in to the motherboard. It will share system RAM with the processor.

Imagine buying a board with 4 (or more) sockets. You start with some basic on-board video technology, nothing fancy. Initially you just have one CPU (dual-core, whatever). Later you realize you miss your favorite 3D games. You go out and buy an A(MD/TI) GPU and drop it in the cHT socket. Voila, instant 3D gaming. It would probably be cheaper, too. No PCB, no extra RAM (though expect the "standard" amount of RAM on systems to go up to 2GB or higher). A few years later you find the system getting a bit sluggish from the added Vista requirements that came with the newest "Service Pack". Simply slot in another CPU and GPU to "double" your performance.

This is what this means.

Once it is out the door and proven with ex-ATI GPUs, adding physics co-processors, advanced 3D audio chips, or whatever specialty processor you can think of would be only as difficult as dropping the chip in. It gets the benefit of living very close to main memory on a fast bus, with no card overhead.

Another advantage of this is that we can start moving towards smaller form-factors for mainboards and systems overall.

Later, GPUs will likely be moved on-die with CPUs as a 3rd or 4th core as they will now be able to take advantage of the higher-density manufacturing techniques that CPU-makers generally pioneer.

One thing this means, though, is that bus speeds and bandwidth will need to grow to take back the performance loss of moving the GPU to a thinner bus and slower RAM.
Look for A(MD/TI) moving to larger caches...

 